00:00:00.001 Started by upstream project "autotest-per-patch" build number 132034
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.041 The recommended git tool is: git
00:00:00.041 using credential 00000000-0000-0000-0000-000000000002
00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.061 Fetching changes from the remote Git repository
00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.103 Using shallow fetch with depth 1
00:00:00.103 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.103 > git --version # timeout=10
00:00:00.158 > git --version # 'git version 2.39.2'
00:00:00.158 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.206 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.206 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.980 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.994 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.008 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD)
00:00:04.009 > git config core.sparsecheckout # timeout=10
00:00:04.021 > git read-tree -mu HEAD # timeout=10
00:00:04.039 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5
00:00:04.058 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job"
00:00:04.058 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10
00:00:04.167 [Pipeline] Start of Pipeline
00:00:04.182 [Pipeline] library
00:00:04.183 Loading library shm_lib@master
00:00:04.183 Library shm_lib@master is cached. Copying from home.
00:00:04.201 [Pipeline] node
00:00:19.203 Still waiting to schedule task
00:00:19.203 Waiting for next available executor on ‘vagrant-vm-host’
00:06:51.853 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest
00:06:51.856 [Pipeline] {
00:06:51.868 [Pipeline] catchError
00:06:51.870 [Pipeline] {
00:06:51.887 [Pipeline] wrap
00:06:51.896 [Pipeline] {
00:06:51.906 [Pipeline] stage
00:06:51.908 [Pipeline] { (Prologue)
00:06:51.928 [Pipeline] echo
00:06:51.930 Node: VM-host-SM0
00:06:51.937 [Pipeline] cleanWs
00:06:51.947 [WS-CLEANUP] Deleting project workspace...
00:06:51.947 [WS-CLEANUP] Deferred wipeout is used...
00:06:51.953 [WS-CLEANUP] done
00:06:52.156 [Pipeline] setCustomBuildProperty
00:06:52.235 [Pipeline] httpRequest
00:06:52.641 [Pipeline] echo
00:06:52.643 Sorcerer 10.211.164.101 is alive
00:06:52.654 [Pipeline] retry
00:06:52.656 [Pipeline] {
00:06:52.670 [Pipeline] httpRequest
00:06:52.675 HttpMethod: GET
00:06:52.675 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:06:52.676 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:06:52.677 Response Code: HTTP/1.1 200 OK
00:06:52.678 Success: Status code 200 is in the accepted range: 200,404
00:06:52.679 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:06:52.832 [Pipeline] }
00:06:52.849 [Pipeline] // retry
00:06:52.857 [Pipeline] sh
00:06:53.137 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:06:53.153 [Pipeline] httpRequest
00:06:53.552 [Pipeline] echo
00:06:53.554 Sorcerer 10.211.164.101 is alive
00:06:53.564 [Pipeline] retry
00:06:53.566 [Pipeline] {
00:06:53.581 [Pipeline] httpRequest
00:06:53.586 HttpMethod: GET
00:06:53.587 URL: http://10.211.164.101/packages/spdk_361e7dfef3f0f30efb2dc66a6066e6ca068bb096.tar.gz
00:06:53.587 Sending request to url: http://10.211.164.101/packages/spdk_361e7dfef3f0f30efb2dc66a6066e6ca068bb096.tar.gz
00:06:53.588 Response Code: HTTP/1.1 200 OK
00:06:53.588 Success: Status code 200 is in the accepted range: 200,404
00:06:53.589 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_361e7dfef3f0f30efb2dc66a6066e6ca068bb096.tar.gz
00:06:55.863 [Pipeline] }
00:06:55.881 [Pipeline] // retry
00:06:55.889 [Pipeline] sh
00:06:56.168 + tar --no-same-owner -xf spdk_361e7dfef3f0f30efb2dc66a6066e6ca068bb096.tar.gz
00:06:59.463 [Pipeline] sh
00:06:59.742 + git -C spdk log --oneline -n5
00:06:59.742 361e7dfef accel/mlx5: More precise condition to update DB
00:06:59.742 6d05ff4c4 lib/thread: Add API to register a post poller handler
00:06:59.742 78b0a6b78 nvme/rdma: Support accel sequence
00:06:59.742 6e713f9c6 lib/rdma_provider: Add API to check if accel seq supported
00:06:59.742 477ec7110 lib/mlx5: Add API to check if UMR registration supported
00:06:59.762 [Pipeline] writeFile
00:06:59.778 [Pipeline] sh
00:07:00.065 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:07:00.080 [Pipeline] sh
00:07:00.351 + cat autorun-spdk.conf
00:07:00.351 SPDK_RUN_FUNCTIONAL_TEST=1
00:07:00.351 SPDK_RUN_ASAN=1
00:07:00.351 SPDK_RUN_UBSAN=1
00:07:00.351 SPDK_TEST_RAID=1
00:07:00.351 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:00.357 RUN_NIGHTLY=0
00:07:00.359 [Pipeline] }
00:07:00.374 [Pipeline] // stage
00:07:00.391 [Pipeline] stage
00:07:00.393 [Pipeline] { (Run VM)
00:07:00.406 [Pipeline] sh
00:07:00.698 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:07:00.698 + echo 'Start stage prepare_nvme.sh'
00:07:00.698 Start stage prepare_nvme.sh
00:07:00.698 + [[ -n 6 ]]
00:07:00.698 + disk_prefix=ex6
00:07:00.698 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:07:00.698 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:07:00.698 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:07:00.698 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:00.698 ++ SPDK_RUN_ASAN=1
00:07:00.698 ++ SPDK_RUN_UBSAN=1
00:07:00.698 ++ SPDK_TEST_RAID=1
00:07:00.698 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:00.698 ++ RUN_NIGHTLY=0
00:07:00.698 + cd /var/jenkins/workspace/raid-vg-autotest
00:07:00.698 + nvme_files=()
00:07:00.698 + declare -A nvme_files
00:07:00.698 + backend_dir=/var/lib/libvirt/images/backends
00:07:00.698 + nvme_files['nvme.img']=5G
00:07:00.698 + nvme_files['nvme-cmb.img']=5G
00:07:00.698 + nvme_files['nvme-multi0.img']=4G
00:07:00.698 + nvme_files['nvme-multi1.img']=4G
00:07:00.698 + nvme_files['nvme-multi2.img']=4G
00:07:00.698 + nvme_files['nvme-openstack.img']=8G
00:07:00.698 + nvme_files['nvme-zns.img']=5G
00:07:00.698 + (( SPDK_TEST_NVME_PMR == 1 ))
00:07:00.698 + (( SPDK_TEST_FTL == 1 ))
00:07:00.698 + (( SPDK_TEST_NVME_FDP == 1 ))
00:07:00.698 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:07:00.698 + for nvme in "${!nvme_files[@]}"
00:07:00.698 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:07:00.698 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:07:00.698 + for nvme in "${!nvme_files[@]}"
00:07:00.698 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:07:01.268 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:07:01.268 + for nvme in "${!nvme_files[@]}"
00:07:01.268 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:07:01.268 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:07:01.268 + for nvme in "${!nvme_files[@]}"
00:07:01.268 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:07:01.526 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:07:01.526 + for nvme in "${!nvme_files[@]}"
00:07:01.526 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:07:01.526 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:07:01.526 + for nvme in "${!nvme_files[@]}"
00:07:01.526 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:07:01.526 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:07:01.526 + for nvme in "${!nvme_files[@]}"
00:07:01.526 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:07:02.092 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:07:02.093 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:07:02.093 + echo 'End stage prepare_nvme.sh'
00:07:02.093 End stage prepare_nvme.sh
00:07:02.105 [Pipeline] sh
00:07:02.391 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:07:02.391 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:07:02.391
00:07:02.391 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:07:02.391 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:07:02.391 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:07:02.391 HELP=0
00:07:02.391 DRY_RUN=0
00:07:02.391 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:07:02.391 NVME_DISKS_TYPE=nvme,nvme,
00:07:02.391 NVME_AUTO_CREATE=0
00:07:02.391 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:07:02.391 NVME_CMB=,,
00:07:02.391 NVME_PMR=,,
00:07:02.391 NVME_ZNS=,,
00:07:02.391 NVME_MS=,,
00:07:02.391 NVME_FDP=,,
00:07:02.391 SPDK_VAGRANT_DISTRO=fedora39
00:07:02.391 SPDK_VAGRANT_VMCPU=10
00:07:02.391 SPDK_VAGRANT_VMRAM=12288
00:07:02.391 SPDK_VAGRANT_PROVIDER=libvirt
00:07:02.391 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:07:02.391 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:07:02.391 SPDK_OPENSTACK_NETWORK=0
00:07:02.391 VAGRANT_PACKAGE_BOX=0
00:07:02.391 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:07:02.391 FORCE_DISTRO=true
00:07:02.391 VAGRANT_BOX_VERSION=
00:07:02.391 EXTRA_VAGRANTFILES=
00:07:02.391 NIC_MODEL=e1000
00:07:02.391
00:07:02.391 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:07:02.391 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:07:05.672 Bringing machine 'default' up with 'libvirt' provider...
00:07:06.633 ==> default: Creating image (snapshot of base box volume).
00:07:06.633 ==> default: Creating domain with the following settings...
00:07:06.633 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730731175_9ce2283a94abe8815db4
00:07:06.633 ==> default: -- Domain type: kvm
00:07:06.633 ==> default: -- Cpus: 10
00:07:06.633 ==> default: -- Feature: acpi
00:07:06.633 ==> default: -- Feature: apic
00:07:06.633 ==> default: -- Feature: pae
00:07:06.633 ==> default: -- Memory: 12288M
00:07:06.633 ==> default: -- Memory Backing: hugepages:
00:07:06.633 ==> default: -- Management MAC:
00:07:06.633 ==> default: -- Loader:
00:07:06.633 ==> default: -- Nvram:
00:07:06.633 ==> default: -- Base box: spdk/fedora39
00:07:06.633 ==> default: -- Storage pool: default
00:07:06.633 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730731175_9ce2283a94abe8815db4.img (20G)
00:07:06.633 ==> default: -- Volume Cache: default
00:07:06.633 ==> default: -- Kernel:
00:07:06.633 ==> default: -- Initrd:
00:07:06.633 ==> default: -- Graphics Type: vnc
00:07:06.633 ==> default: -- Graphics Port: -1
00:07:06.633 ==> default: -- Graphics IP: 127.0.0.1
00:07:06.633 ==> default: -- Graphics Password: Not defined
00:07:06.633 ==> default: -- Video Type: cirrus
00:07:06.633 ==> default: -- Video VRAM: 9216
00:07:06.633 ==> default: -- Sound Type:
00:07:06.633 ==> default: -- Keymap: en-us
00:07:06.633 ==> default: -- TPM Path:
00:07:06.633 ==> default: -- INPUT: type=mouse, bus=ps2
00:07:06.633 ==> default: -- Command line args:
00:07:06.633 ==> default: -> value=-device,
00:07:06.633 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:07:06.633 ==> default: -> value=-drive,
00:07:06.633 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0,
00:07:06.633 ==> default: -> value=-device,
00:07:06.633 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:07:06.633 ==> default: -> value=-device,
00:07:06.633 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:07:06.633 ==> default: -> value=-drive,
00:07:06.633 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:07:06.633 ==> default: -> value=-device,
00:07:06.633 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:07:06.633 ==> default: -> value=-drive,
00:07:06.633 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:07:06.633 ==> default: -> value=-device,
00:07:06.633 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:07:06.633 ==> default: -> value=-drive,
00:07:06.633 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:07:06.633 ==> default: -> value=-device,
00:07:06.633 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:07:06.891 ==> default: Creating shared folders metadata...
00:07:07.148 ==> default: Starting domain.
00:07:09.050 ==> default: Waiting for domain to get an IP address...
00:07:27.130 ==> default: Waiting for SSH to become available...
00:07:27.130 ==> default: Configuring and enabling network interfaces...
00:07:31.317 default: SSH address: 192.168.121.118:22
00:07:31.317 default: SSH username: vagrant
00:07:31.317 default: SSH auth method: private key
00:07:33.220 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:07:41.340 ==> default: Mounting SSHFS shared folder...
00:07:42.275 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:07:42.275 ==> default: Checking Mount..
00:07:43.649 ==> default: Folder Successfully Mounted!
00:07:43.649 ==> default: Running provisioner: file...
00:07:44.583 default: ~/.gitconfig => .gitconfig
00:07:44.841
00:07:44.841 SUCCESS!
00:07:44.841
00:07:44.841 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:07:44.841 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:07:44.841 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:07:44.841
00:07:44.850 [Pipeline] }
00:07:44.866 [Pipeline] // stage
00:07:44.877 [Pipeline] dir
00:07:44.877 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:07:44.879 [Pipeline] {
00:07:44.891 [Pipeline] catchError
00:07:44.893 [Pipeline] {
00:07:44.906 [Pipeline] sh
00:07:45.185 + vagrant ssh-config --host vagrant
00:07:45.185 + sed -ne /^Host/,$p
00:07:45.185 + tee ssh_conf
00:07:49.370 Host vagrant
00:07:49.370 HostName 192.168.121.118
00:07:49.370 User vagrant
00:07:49.370 Port 22
00:07:49.370 UserKnownHostsFile /dev/null
00:07:49.370 StrictHostKeyChecking no
00:07:49.370 PasswordAuthentication no
00:07:49.370 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:07:49.370 IdentitiesOnly yes
00:07:49.370 LogLevel FATAL
00:07:49.370 ForwardAgent yes
00:07:49.370 ForwardX11 yes
00:07:49.370
00:07:49.383 [Pipeline] withEnv
00:07:49.385 [Pipeline] {
00:07:49.397 [Pipeline] sh
00:07:49.679 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:07:49.679 source /etc/os-release
00:07:49.679 [[ -e /image.version ]] && img=$(< /image.version)
00:07:49.679 # Minimal, systemd-like check.
00:07:49.679 if [[ -e /.dockerenv ]]; then
00:07:49.679 # Clear garbage from the node's name:
00:07:49.679 # agt-er_autotest_547-896 -> autotest_547-896
00:07:49.679 # $HOSTNAME is the actual container id
00:07:49.679 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:07:49.679 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:07:49.679 # We can assume this is a mount from a host where container is running,
00:07:49.679 # so fetch its hostname to easily identify the target swarm worker.
00:07:49.679 container="$(< /etc/hostname) ($agent)"
00:07:49.679 else
00:07:49.679 # Fallback
00:07:49.679 container=$agent
00:07:49.679 fi
00:07:49.679 fi
00:07:49.679 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:07:49.679
00:07:49.945 [Pipeline] }
00:07:49.957 [Pipeline] // withEnv
00:07:49.963 [Pipeline] setCustomBuildProperty
00:07:49.973 [Pipeline] stage
00:07:49.975 [Pipeline] { (Tests)
00:07:49.988 [Pipeline] sh
00:07:50.264 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:07:50.538 [Pipeline] sh
00:07:50.819 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:07:51.127 [Pipeline] timeout
00:07:51.127 Timeout set to expire in 1 hr 30 min
00:07:51.129 [Pipeline] {
00:07:51.144 [Pipeline] sh
00:07:51.423 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:07:51.989 HEAD is now at 361e7dfef accel/mlx5: More precise condition to update DB
00:07:52.009 [Pipeline] sh
00:07:52.290 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:07:52.561 [Pipeline] sh
00:07:52.839 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:07:53.123 [Pipeline] sh
00:07:53.466 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:07:53.724 ++ readlink -f spdk_repo
00:07:53.724 + DIR_ROOT=/home/vagrant/spdk_repo
00:07:53.724 + [[ -n /home/vagrant/spdk_repo ]]
00:07:53.724 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:07:53.724 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:07:53.724 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:07:53.724 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:07:53.724 + [[ -d /home/vagrant/spdk_repo/output ]]
00:07:53.724 + [[ raid-vg-autotest == pkgdep-* ]]
00:07:53.724 + cd /home/vagrant/spdk_repo
00:07:53.724 + source /etc/os-release
00:07:53.724 ++ NAME='Fedora Linux'
00:07:53.724 ++ VERSION='39 (Cloud Edition)'
00:07:53.724 ++ ID=fedora
00:07:53.724 ++ VERSION_ID=39
00:07:53.724 ++ VERSION_CODENAME=
00:07:53.724 ++ PLATFORM_ID=platform:f39
00:07:53.724 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:07:53.724 ++ ANSI_COLOR='0;38;2;60;110;180'
00:07:53.724 ++ LOGO=fedora-logo-icon
00:07:53.724 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:07:53.724 ++ HOME_URL=https://fedoraproject.org/
00:07:53.724 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:07:53.724 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:07:53.724 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:07:53.724 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:07:53.724 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:07:53.724 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:07:53.724 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:07:53.724 ++ SUPPORT_END=2024-11-12
00:07:53.724 ++ VARIANT='Cloud Edition'
00:07:53.724 ++ VARIANT_ID=cloud
00:07:53.724 + uname -a
00:07:53.724 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:07:53.724 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:07:53.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:54.239 Hugepages
00:07:54.239 node hugesize free / total
00:07:54.239 node0 1048576kB 0 / 0
00:07:54.239 node0 2048kB 0 / 0
00:07:54.239
00:07:54.239 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:54.239 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:07:54.239 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:07:54.239 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:07:54.239 + rm -f /tmp/spdk-ld-path
00:07:54.239 + source autorun-spdk.conf
00:07:54.239 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:54.239 ++ SPDK_RUN_ASAN=1
00:07:54.239 ++ SPDK_RUN_UBSAN=1
00:07:54.239 ++ SPDK_TEST_RAID=1
00:07:54.239 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:54.239 ++ RUN_NIGHTLY=0
00:07:54.239 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:07:54.239 + [[ -n '' ]]
00:07:54.239 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:07:54.239 + for M in /var/spdk/build-*-manifest.txt
00:07:54.239 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:07:54.239 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:07:54.239 + for M in /var/spdk/build-*-manifest.txt
00:07:54.239 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:07:54.239 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:07:54.239 + for M in /var/spdk/build-*-manifest.txt
00:07:54.239 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:07:54.239 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:07:54.239 ++ uname
00:07:54.239 + [[ Linux == \L\i\n\u\x ]]
00:07:54.239 + sudo dmesg -T
00:07:54.239 + sudo dmesg --clear
00:07:54.239 + dmesg_pid=5265
00:07:54.239 + sudo dmesg -Tw
00:07:54.239 + [[ Fedora Linux == FreeBSD ]]
00:07:54.239 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:54.239 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:54.239 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:54.239 + [[ -x /usr/src/fio-static/fio ]]
00:07:54.239 + export FIO_BIN=/usr/src/fio-static/fio
00:07:54.239 + FIO_BIN=/usr/src/fio-static/fio
00:07:54.239 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:07:54.239 + [[ ! -v VFIO_QEMU_BIN ]]
00:07:54.239 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:07:54.239 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:54.239 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:54.239 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:07:54.239 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:54.239 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:54.239 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:07:54.497 14:40:24 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
14:40:24 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
14:40:24 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
14:40:24 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
14:40:24 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
14:40:24 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
14:40:24 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
14:40:24 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
14:40:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
14:40:24 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
14:40:24 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
14:40:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
14:40:24 -- scripts/common.sh@15 -- $ shopt -s extglob
14:40:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
14:40:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:40:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
14:40:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:40:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:40:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:40:24 -- paths/export.sh@5 -- $ export PATH
14:40:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:40:24 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
14:40:24 -- common/autobuild_common.sh@486 -- $ date +%s
14:40:24 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730731224.XXXXXX
14:40:24 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730731224.95tDw7
14:40:24 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
14:40:24 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
14:40:24 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
14:40:24 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
14:40:24 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
14:40:24 -- common/autobuild_common.sh@502 -- $ get_config_params
14:40:24 -- common/autotest_common.sh@407 -- $ xtrace_disable
14:40:24 -- common/autotest_common.sh@10 -- $ set +x
14:40:24 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
14:40:24 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
14:40:24 -- pm/common@17 -- $ local monitor
14:40:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:40:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:40:24 -- pm/common@25 -- $ sleep 1
14:40:24 -- pm/common@21 -- $ date +%s
14:40:24 -- pm/common@21 -- $ date +%s
00:07:54.498 14:40:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730731224
14:40:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730731224
00:07:54.498 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730731224_collect-cpu-load.pm.log
00:07:54.498 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730731224_collect-vmstat.pm.log
00:07:55.433 14:40:25 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
14:40:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
14:40:25 -- spdk/autobuild.sh@12 -- $ umask 022
14:40:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
14:40:25 -- spdk/autobuild.sh@16 -- $ date -u
00:07:55.433 Mon Nov 4 02:40:25 PM UTC 2024
14:40:25 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:07:55.433 v25.01-pre-172-g361e7dfef
14:40:25 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
14:40:25 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
14:40:25 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
14:40:25 -- common/autotest_common.sh@1109 -- $ xtrace_disable
14:40:25 -- common/autotest_common.sh@10 -- $ set +x
00:07:55.433 ************************************
00:07:55.433 START TEST asan
00:07:55.433 ************************************
00:07:55.433 using asan
14:40:25 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:07:55.433
00:07:55.433 real 0m0.000s
00:07:55.433 user 0m0.000s
00:07:55.433 sys 0m0.000s
14:40:25 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:07:55.433 ************************************
00:07:55.433 END TEST asan
00:07:55.433 ************************************
14:40:25 asan -- common/autotest_common.sh@10 -- $ set +x
00:07:55.695 14:40:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
14:40:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
14:40:25 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
14:40:25 -- common/autotest_common.sh@1109 -- $ xtrace_disable
14:40:25 -- common/autotest_common.sh@10 -- $ set +x
00:07:55.695 ************************************
00:07:55.695 START TEST ubsan
00:07:55.695 ************************************
using ubsan
14:40:25 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:07:55.696
00:07:55.696 real 0m0.000s
00:07:55.696 user 0m0.000s
00:07:55.696 sys 0m0.000s
14:40:25 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:07:55.696 ************************************
00:07:55.696 END TEST ubsan
00:07:55.696 ************************************
14:40:25 ubsan -- common/autotest_common.sh@10 -- $ set +x
14:40:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
14:40:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
14:40:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
14:40:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
14:40:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
14:40:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
14:40:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
14:40:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
14:40:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:07:55.696 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:07:55.696 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:56.264 Using 'verbs' RDMA provider
00:08:12.100 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:08:24.337 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:08:24.337 Creating mk/config.mk...done.
00:08:24.337 Creating mk/cc.flags.mk...done.
00:08:24.337 Type 'make' to build.
14:40:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
14:40:53 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
14:40:53 -- common/autotest_common.sh@1109 -- $ xtrace_disable
14:40:53 -- common/autotest_common.sh@10 -- $ set +x
00:08:24.337 ************************************
00:08:24.337 START TEST make
00:08:24.337 ************************************
14:40:53 make -- common/autotest_common.sh@1127 -- $ make -j10
00:08:24.337 make[1]: Nothing to be done for 'all'.
00:08:36.537 The Meson build system 00:08:36.537 Version: 1.5.0 00:08:36.537 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:08:36.537 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:08:36.537 Build type: native build 00:08:36.537 Program cat found: YES (/usr/bin/cat) 00:08:36.537 Project name: DPDK 00:08:36.537 Project version: 24.03.0 00:08:36.537 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:08:36.537 C linker for the host machine: cc ld.bfd 2.40-14 00:08:36.537 Host machine cpu family: x86_64 00:08:36.537 Host machine cpu: x86_64 00:08:36.537 Message: ## Building in Developer Mode ## 00:08:36.537 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:36.537 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:08:36.537 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:08:36.537 Program python3 found: YES (/usr/bin/python3) 00:08:36.537 Program cat found: YES (/usr/bin/cat) 00:08:36.537 Compiler for C supports arguments -march=native: YES 00:08:36.537 Checking for size of "void *" : 8 00:08:36.537 Checking for size of "void *" : 8 (cached) 00:08:36.537 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:08:36.537 Library m found: YES 00:08:36.537 Library numa found: YES 00:08:36.537 Has header "numaif.h" : YES 00:08:36.537 Library fdt found: NO 00:08:36.537 Library execinfo found: NO 00:08:36.537 Has header "execinfo.h" : YES 00:08:36.537 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:08:36.537 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:36.537 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:36.537 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:36.537 Run-time dependency openssl found: YES 3.1.1 00:08:36.537 Run-time dependency libpcap found: YES 1.10.4 00:08:36.537 Has header "pcap.h" with dependency 
libpcap: YES 00:08:36.537 Compiler for C supports arguments -Wcast-qual: YES 00:08:36.537 Compiler for C supports arguments -Wdeprecated: YES 00:08:36.537 Compiler for C supports arguments -Wformat: YES 00:08:36.537 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:36.537 Compiler for C supports arguments -Wformat-security: NO 00:08:36.537 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:36.537 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:36.537 Compiler for C supports arguments -Wnested-externs: YES 00:08:36.537 Compiler for C supports arguments -Wold-style-definition: YES 00:08:36.537 Compiler for C supports arguments -Wpointer-arith: YES 00:08:36.537 Compiler for C supports arguments -Wsign-compare: YES 00:08:36.537 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:36.537 Compiler for C supports arguments -Wundef: YES 00:08:36.537 Compiler for C supports arguments -Wwrite-strings: YES 00:08:36.537 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:36.537 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:36.537 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:36.537 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:36.537 Program objdump found: YES (/usr/bin/objdump) 00:08:36.537 Compiler for C supports arguments -mavx512f: YES 00:08:36.537 Checking if "AVX512 checking" compiles: YES 00:08:36.537 Fetching value of define "__SSE4_2__" : 1 00:08:36.537 Fetching value of define "__AES__" : 1 00:08:36.537 Fetching value of define "__AVX__" : 1 00:08:36.537 Fetching value of define "__AVX2__" : 1 00:08:36.537 Fetching value of define "__AVX512BW__" : (undefined) 00:08:36.537 Fetching value of define "__AVX512CD__" : (undefined) 00:08:36.537 Fetching value of define "__AVX512DQ__" : (undefined) 00:08:36.537 Fetching value of define "__AVX512F__" : (undefined) 00:08:36.537 Fetching value of define "__AVX512VL__" : 
(undefined) 00:08:36.537 Fetching value of define "__PCLMUL__" : 1 00:08:36.537 Fetching value of define "__RDRND__" : 1 00:08:36.537 Fetching value of define "__RDSEED__" : 1 00:08:36.537 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:08:36.537 Fetching value of define "__znver1__" : (undefined) 00:08:36.537 Fetching value of define "__znver2__" : (undefined) 00:08:36.537 Fetching value of define "__znver3__" : (undefined) 00:08:36.537 Fetching value of define "__znver4__" : (undefined) 00:08:36.537 Library asan found: YES 00:08:36.537 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:36.537 Message: lib/log: Defining dependency "log" 00:08:36.537 Message: lib/kvargs: Defining dependency "kvargs" 00:08:36.537 Message: lib/telemetry: Defining dependency "telemetry" 00:08:36.537 Library rt found: YES 00:08:36.537 Checking for function "getentropy" : NO 00:08:36.537 Message: lib/eal: Defining dependency "eal" 00:08:36.537 Message: lib/ring: Defining dependency "ring" 00:08:36.537 Message: lib/rcu: Defining dependency "rcu" 00:08:36.537 Message: lib/mempool: Defining dependency "mempool" 00:08:36.537 Message: lib/mbuf: Defining dependency "mbuf" 00:08:36.537 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:36.537 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:08:36.537 Compiler for C supports arguments -mpclmul: YES 00:08:36.537 Compiler for C supports arguments -maes: YES 00:08:36.537 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:36.537 Compiler for C supports arguments -mavx512bw: YES 00:08:36.537 Compiler for C supports arguments -mavx512dq: YES 00:08:36.537 Compiler for C supports arguments -mavx512vl: YES 00:08:36.537 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:36.537 Compiler for C supports arguments -mavx2: YES 00:08:36.537 Compiler for C supports arguments -mavx: YES 00:08:36.537 Message: lib/net: Defining dependency "net" 00:08:36.537 Message: lib/meter: Defining 
dependency "meter" 00:08:36.537 Message: lib/ethdev: Defining dependency "ethdev" 00:08:36.537 Message: lib/pci: Defining dependency "pci" 00:08:36.537 Message: lib/cmdline: Defining dependency "cmdline" 00:08:36.537 Message: lib/hash: Defining dependency "hash" 00:08:36.537 Message: lib/timer: Defining dependency "timer" 00:08:36.537 Message: lib/compressdev: Defining dependency "compressdev" 00:08:36.537 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:36.537 Message: lib/dmadev: Defining dependency "dmadev" 00:08:36.537 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:36.537 Message: lib/power: Defining dependency "power" 00:08:36.537 Message: lib/reorder: Defining dependency "reorder" 00:08:36.537 Message: lib/security: Defining dependency "security" 00:08:36.537 Has header "linux/userfaultfd.h" : YES 00:08:36.537 Has header "linux/vduse.h" : YES 00:08:36.537 Message: lib/vhost: Defining dependency "vhost" 00:08:36.537 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:36.537 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:36.537 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:36.537 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:36.537 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:08:36.537 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:08:36.537 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:08:36.537 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:08:36.537 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:08:36.537 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:08:36.537 Program doxygen found: YES (/usr/local/bin/doxygen) 00:08:36.537 Configuring doxy-api-html.conf using configuration 00:08:36.537 Configuring doxy-api-man.conf using configuration 00:08:36.537 Program mandb found: YES 
(/usr/bin/mandb) 00:08:36.537 Program sphinx-build found: NO 00:08:36.537 Configuring rte_build_config.h using configuration 00:08:36.537 Message: 00:08:36.537 ================= 00:08:36.537 Applications Enabled 00:08:36.537 ================= 00:08:36.537 00:08:36.537 apps: 00:08:36.537 00:08:36.537 00:08:36.537 Message: 00:08:36.537 ================= 00:08:36.537 Libraries Enabled 00:08:36.537 ================= 00:08:36.537 00:08:36.537 libs: 00:08:36.537 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:36.537 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:08:36.537 cryptodev, dmadev, power, reorder, security, vhost, 00:08:36.537 00:08:36.537 Message: 00:08:36.537 =============== 00:08:36.537 Drivers Enabled 00:08:36.537 =============== 00:08:36.537 00:08:36.537 common: 00:08:36.537 00:08:36.537 bus: 00:08:36.537 pci, vdev, 00:08:36.537 mempool: 00:08:36.537 ring, 00:08:36.537 dma: 00:08:36.537 00:08:36.537 net: 00:08:36.537 00:08:36.537 crypto: 00:08:36.537 00:08:36.537 compress: 00:08:36.537 00:08:36.537 vdpa: 00:08:36.537 00:08:36.537 00:08:36.537 Message: 00:08:36.537 ================= 00:08:36.537 Content Skipped 00:08:36.537 ================= 00:08:36.537 00:08:36.537 apps: 00:08:36.537 dumpcap: explicitly disabled via build config 00:08:36.537 graph: explicitly disabled via build config 00:08:36.537 pdump: explicitly disabled via build config 00:08:36.537 proc-info: explicitly disabled via build config 00:08:36.537 test-acl: explicitly disabled via build config 00:08:36.537 test-bbdev: explicitly disabled via build config 00:08:36.537 test-cmdline: explicitly disabled via build config 00:08:36.537 test-compress-perf: explicitly disabled via build config 00:08:36.537 test-crypto-perf: explicitly disabled via build config 00:08:36.537 test-dma-perf: explicitly disabled via build config 00:08:36.537 test-eventdev: explicitly disabled via build config 00:08:36.537 test-fib: explicitly disabled via build config 00:08:36.537 
test-flow-perf: explicitly disabled via build config 00:08:36.537 test-gpudev: explicitly disabled via build config 00:08:36.537 test-mldev: explicitly disabled via build config 00:08:36.537 test-pipeline: explicitly disabled via build config 00:08:36.537 test-pmd: explicitly disabled via build config 00:08:36.537 test-regex: explicitly disabled via build config 00:08:36.537 test-sad: explicitly disabled via build config 00:08:36.538 test-security-perf: explicitly disabled via build config 00:08:36.538 00:08:36.538 libs: 00:08:36.538 argparse: explicitly disabled via build config 00:08:36.538 metrics: explicitly disabled via build config 00:08:36.538 acl: explicitly disabled via build config 00:08:36.538 bbdev: explicitly disabled via build config 00:08:36.538 bitratestats: explicitly disabled via build config 00:08:36.538 bpf: explicitly disabled via build config 00:08:36.538 cfgfile: explicitly disabled via build config 00:08:36.538 distributor: explicitly disabled via build config 00:08:36.538 efd: explicitly disabled via build config 00:08:36.538 eventdev: explicitly disabled via build config 00:08:36.538 dispatcher: explicitly disabled via build config 00:08:36.538 gpudev: explicitly disabled via build config 00:08:36.538 gro: explicitly disabled via build config 00:08:36.538 gso: explicitly disabled via build config 00:08:36.538 ip_frag: explicitly disabled via build config 00:08:36.538 jobstats: explicitly disabled via build config 00:08:36.538 latencystats: explicitly disabled via build config 00:08:36.538 lpm: explicitly disabled via build config 00:08:36.538 member: explicitly disabled via build config 00:08:36.538 pcapng: explicitly disabled via build config 00:08:36.538 rawdev: explicitly disabled via build config 00:08:36.538 regexdev: explicitly disabled via build config 00:08:36.538 mldev: explicitly disabled via build config 00:08:36.538 rib: explicitly disabled via build config 00:08:36.538 sched: explicitly disabled via build config 00:08:36.538 
stack: explicitly disabled via build config 00:08:36.538 ipsec: explicitly disabled via build config 00:08:36.538 pdcp: explicitly disabled via build config 00:08:36.538 fib: explicitly disabled via build config 00:08:36.538 port: explicitly disabled via build config 00:08:36.538 pdump: explicitly disabled via build config 00:08:36.538 table: explicitly disabled via build config 00:08:36.538 pipeline: explicitly disabled via build config 00:08:36.538 graph: explicitly disabled via build config 00:08:36.538 node: explicitly disabled via build config 00:08:36.538 00:08:36.538 drivers: 00:08:36.538 common/cpt: not in enabled drivers build config 00:08:36.538 common/dpaax: not in enabled drivers build config 00:08:36.538 common/iavf: not in enabled drivers build config 00:08:36.538 common/idpf: not in enabled drivers build config 00:08:36.538 common/ionic: not in enabled drivers build config 00:08:36.538 common/mvep: not in enabled drivers build config 00:08:36.538 common/octeontx: not in enabled drivers build config 00:08:36.538 bus/auxiliary: not in enabled drivers build config 00:08:36.538 bus/cdx: not in enabled drivers build config 00:08:36.538 bus/dpaa: not in enabled drivers build config 00:08:36.538 bus/fslmc: not in enabled drivers build config 00:08:36.538 bus/ifpga: not in enabled drivers build config 00:08:36.538 bus/platform: not in enabled drivers build config 00:08:36.538 bus/uacce: not in enabled drivers build config 00:08:36.538 bus/vmbus: not in enabled drivers build config 00:08:36.538 common/cnxk: not in enabled drivers build config 00:08:36.538 common/mlx5: not in enabled drivers build config 00:08:36.538 common/nfp: not in enabled drivers build config 00:08:36.538 common/nitrox: not in enabled drivers build config 00:08:36.538 common/qat: not in enabled drivers build config 00:08:36.538 common/sfc_efx: not in enabled drivers build config 00:08:36.538 mempool/bucket: not in enabled drivers build config 00:08:36.538 mempool/cnxk: not in enabled 
drivers build config 00:08:36.538 mempool/dpaa: not in enabled drivers build config 00:08:36.538 mempool/dpaa2: not in enabled drivers build config 00:08:36.538 mempool/octeontx: not in enabled drivers build config 00:08:36.538 mempool/stack: not in enabled drivers build config 00:08:36.538 dma/cnxk: not in enabled drivers build config 00:08:36.538 dma/dpaa: not in enabled drivers build config 00:08:36.538 dma/dpaa2: not in enabled drivers build config 00:08:36.538 dma/hisilicon: not in enabled drivers build config 00:08:36.538 dma/idxd: not in enabled drivers build config 00:08:36.538 dma/ioat: not in enabled drivers build config 00:08:36.538 dma/skeleton: not in enabled drivers build config 00:08:36.538 net/af_packet: not in enabled drivers build config 00:08:36.538 net/af_xdp: not in enabled drivers build config 00:08:36.538 net/ark: not in enabled drivers build config 00:08:36.538 net/atlantic: not in enabled drivers build config 00:08:36.538 net/avp: not in enabled drivers build config 00:08:36.538 net/axgbe: not in enabled drivers build config 00:08:36.538 net/bnx2x: not in enabled drivers build config 00:08:36.538 net/bnxt: not in enabled drivers build config 00:08:36.538 net/bonding: not in enabled drivers build config 00:08:36.538 net/cnxk: not in enabled drivers build config 00:08:36.538 net/cpfl: not in enabled drivers build config 00:08:36.538 net/cxgbe: not in enabled drivers build config 00:08:36.538 net/dpaa: not in enabled drivers build config 00:08:36.538 net/dpaa2: not in enabled drivers build config 00:08:36.538 net/e1000: not in enabled drivers build config 00:08:36.538 net/ena: not in enabled drivers build config 00:08:36.538 net/enetc: not in enabled drivers build config 00:08:36.538 net/enetfec: not in enabled drivers build config 00:08:36.538 net/enic: not in enabled drivers build config 00:08:36.538 net/failsafe: not in enabled drivers build config 00:08:36.538 net/fm10k: not in enabled drivers build config 00:08:36.538 net/gve: not in 
enabled drivers build config 00:08:36.538 net/hinic: not in enabled drivers build config 00:08:36.538 net/hns3: not in enabled drivers build config 00:08:36.538 net/i40e: not in enabled drivers build config 00:08:36.538 net/iavf: not in enabled drivers build config 00:08:36.538 net/ice: not in enabled drivers build config 00:08:36.538 net/idpf: not in enabled drivers build config 00:08:36.538 net/igc: not in enabled drivers build config 00:08:36.538 net/ionic: not in enabled drivers build config 00:08:36.538 net/ipn3ke: not in enabled drivers build config 00:08:36.538 net/ixgbe: not in enabled drivers build config 00:08:36.538 net/mana: not in enabled drivers build config 00:08:36.538 net/memif: not in enabled drivers build config 00:08:36.538 net/mlx4: not in enabled drivers build config 00:08:36.538 net/mlx5: not in enabled drivers build config 00:08:36.538 net/mvneta: not in enabled drivers build config 00:08:36.538 net/mvpp2: not in enabled drivers build config 00:08:36.538 net/netvsc: not in enabled drivers build config 00:08:36.538 net/nfb: not in enabled drivers build config 00:08:36.538 net/nfp: not in enabled drivers build config 00:08:36.538 net/ngbe: not in enabled drivers build config 00:08:36.538 net/null: not in enabled drivers build config 00:08:36.538 net/octeontx: not in enabled drivers build config 00:08:36.538 net/octeon_ep: not in enabled drivers build config 00:08:36.538 net/pcap: not in enabled drivers build config 00:08:36.538 net/pfe: not in enabled drivers build config 00:08:36.538 net/qede: not in enabled drivers build config 00:08:36.538 net/ring: not in enabled drivers build config 00:08:36.538 net/sfc: not in enabled drivers build config 00:08:36.538 net/softnic: not in enabled drivers build config 00:08:36.538 net/tap: not in enabled drivers build config 00:08:36.538 net/thunderx: not in enabled drivers build config 00:08:36.538 net/txgbe: not in enabled drivers build config 00:08:36.538 net/vdev_netvsc: not in enabled drivers build 
config 00:08:36.538 net/vhost: not in enabled drivers build config 00:08:36.538 net/virtio: not in enabled drivers build config 00:08:36.538 net/vmxnet3: not in enabled drivers build config 00:08:36.538 raw/*: missing internal dependency, "rawdev" 00:08:36.538 crypto/armv8: not in enabled drivers build config 00:08:36.538 crypto/bcmfs: not in enabled drivers build config 00:08:36.538 crypto/caam_jr: not in enabled drivers build config 00:08:36.538 crypto/ccp: not in enabled drivers build config 00:08:36.538 crypto/cnxk: not in enabled drivers build config 00:08:36.538 crypto/dpaa_sec: not in enabled drivers build config 00:08:36.538 crypto/dpaa2_sec: not in enabled drivers build config 00:08:36.538 crypto/ipsec_mb: not in enabled drivers build config 00:08:36.538 crypto/mlx5: not in enabled drivers build config 00:08:36.538 crypto/mvsam: not in enabled drivers build config 00:08:36.538 crypto/nitrox: not in enabled drivers build config 00:08:36.538 crypto/null: not in enabled drivers build config 00:08:36.538 crypto/octeontx: not in enabled drivers build config 00:08:36.538 crypto/openssl: not in enabled drivers build config 00:08:36.538 crypto/scheduler: not in enabled drivers build config 00:08:36.538 crypto/uadk: not in enabled drivers build config 00:08:36.538 crypto/virtio: not in enabled drivers build config 00:08:36.538 compress/isal: not in enabled drivers build config 00:08:36.538 compress/mlx5: not in enabled drivers build config 00:08:36.538 compress/nitrox: not in enabled drivers build config 00:08:36.538 compress/octeontx: not in enabled drivers build config 00:08:36.538 compress/zlib: not in enabled drivers build config 00:08:36.538 regex/*: missing internal dependency, "regexdev" 00:08:36.538 ml/*: missing internal dependency, "mldev" 00:08:36.538 vdpa/ifc: not in enabled drivers build config 00:08:36.538 vdpa/mlx5: not in enabled drivers build config 00:08:36.538 vdpa/nfp: not in enabled drivers build config 00:08:36.538 vdpa/sfc: not in enabled 
drivers build config 00:08:36.538 event/*: missing internal dependency, "eventdev" 00:08:36.538 baseband/*: missing internal dependency, "bbdev" 00:08:36.538 gpu/*: missing internal dependency, "gpudev" 00:08:36.538 00:08:36.538 00:08:36.538 Build targets in project: 85 00:08:36.538 00:08:36.538 DPDK 24.03.0 00:08:36.538 00:08:36.538 User defined options 00:08:36.538 buildtype : debug 00:08:36.538 default_library : shared 00:08:36.538 libdir : lib 00:08:36.538 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:36.538 b_sanitize : address 00:08:36.538 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:08:36.538 c_link_args : 00:08:36.538 cpu_instruction_set: native 00:08:36.538 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:08:36.538 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:08:36.538 enable_docs : false 00:08:36.538 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:08:36.538 enable_kmods : false 00:08:36.538 max_lcores : 128 00:08:36.538 tests : false 00:08:36.538 00:08:36.539 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:36.797 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:08:36.797 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:36.797 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:36.797 [3/268] Linking static target lib/librte_kvargs.a 00:08:36.797 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:36.797 [5/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:36.797 [6/268] Linking static target lib/librte_log.a 00:08:37.364 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.364 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:37.364 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:37.364 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:37.623 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:37.623 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:37.881 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:37.881 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:37.881 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:37.881 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.881 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:37.881 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:37.881 [19/268] Linking static target lib/librte_telemetry.a 00:08:37.881 [20/268] Linking target lib/librte_log.so.24.1 00:08:38.448 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:08:38.448 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:38.448 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:38.448 [24/268] Linking target lib/librte_kvargs.so.24.1 00:08:38.448 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:38.448 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:38.706 [27/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:08:38.706 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:38.706 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:38.706 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:38.706 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:38.963 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.963 [33/268] Linking target lib/librte_telemetry.so.24.1 00:08:38.963 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:39.221 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:39.221 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:39.221 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:08:39.478 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:39.478 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:39.478 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:39.478 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:39.478 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:39.736 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:39.736 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:39.736 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:39.992 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:39.992 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:40.249 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:40.506 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:40.506 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:40.506 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:40.506 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:40.506 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:40.764 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:40.764 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:41.020 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:41.020 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:41.020 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:41.020 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:41.277 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:41.277 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:41.277 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:41.535 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:41.535 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:41.535 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:41.792 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:42.050 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:42.050 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:42.308 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:42.308 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:42.566 [71/268] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:42.566 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:42.566 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:42.566 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:42.566 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:42.566 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:42.566 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:42.825 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:42.825 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:43.083 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:43.083 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:43.083 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:43.083 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:43.360 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:43.360 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:43.360 [86/268] Linking static target lib/librte_ring.a 00:08:43.360 [87/268] Linking static target lib/librte_eal.a 00:08:43.617 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:43.617 [89/268] Linking static target lib/librte_rcu.a 00:08:43.617 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:43.873 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:43.873 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.873 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:43.873 [94/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:43.873 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:44.131 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:44.131 [97/268] Linking static target lib/librte_mempool.a 00:08:44.131 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.131 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:44.388 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:44.649 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:44.649 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:44.649 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:44.649 [104/268] Linking static target lib/librte_mbuf.a 00:08:44.649 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:44.907 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:44.907 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:44.907 [108/268] Linking static target lib/librte_meter.a 00:08:44.907 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:45.167 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:45.167 [111/268] Linking static target lib/librte_net.a 00:08:45.167 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.423 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:45.423 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:45.423 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.423 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:45.679 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to 
capture output) 00:08:45.679 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.937 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:46.195 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:46.452 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:46.452 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:46.452 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:46.710 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:46.968 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:46.968 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:46.968 [127/268] Linking static target lib/librte_pci.a 00:08:46.968 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:46.968 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:47.227 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:47.227 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:47.227 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:47.227 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.227 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:47.484 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:47.484 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:47.484 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:47.484 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:47.484 [139/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:47.484 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:47.484 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:47.484 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:47.484 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:47.484 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:47.741 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:48.003 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:48.003 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:48.260 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:48.260 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:48.260 [150/268] Linking static target lib/librte_cmdline.a 00:08:48.260 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:48.520 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:48.779 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:48.779 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:48.779 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:48.779 [156/268] Linking static target lib/librte_timer.a 00:08:48.779 [157/268] Linking static target lib/librte_ethdev.a 00:08:49.038 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:49.038 [159/268] Linking static target lib/librte_hash.a 00:08:49.038 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:49.038 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:49.296 [162/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:49.296 [163/268] Linking static target lib/librte_compressdev.a 00:08:49.296 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:49.296 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:49.555 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:49.555 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:49.813 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:49.813 [169/268] Linking static target lib/librte_dmadev.a 00:08:49.813 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:50.071 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:50.071 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:50.071 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:50.330 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:50.330 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:50.330 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:50.588 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:50.588 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:50.588 [179/268] Linking static target lib/librte_cryptodev.a 00:08:50.588 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:50.588 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:50.847 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:51.105 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:51.105 
[184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:51.105 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:51.105 [186/268] Linking static target lib/librte_power.a 00:08:51.363 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:51.363 [188/268] Linking static target lib/librte_reorder.a 00:08:51.626 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:51.626 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:51.886 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:51.886 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:51.886 [193/268] Linking static target lib/librte_security.a 00:08:52.143 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:52.417 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:52.417 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:52.690 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:52.690 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:52.948 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:53.205 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:53.205 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:53.206 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:53.206 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:53.464 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:53.464 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:53.722 [206/268] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:53.981 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:53.981 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:53.981 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:53.981 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:53.981 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:54.239 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:54.239 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:54.239 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:54.239 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:54.239 [216/268] Linking static target drivers/librte_bus_pci.a 00:08:54.497 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:54.497 [218/268] Linking static target drivers/librte_bus_vdev.a 00:08:54.497 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:54.497 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:54.497 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:54.755 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:54.755 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:54.755 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:54.755 [225/268] Linking static target drivers/librte_mempool_ring.a 00:08:54.755 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:55.012 [227/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:55.947 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:55.947 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:55.947 [230/268] Linking target lib/librte_eal.so.24.1 00:08:56.205 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:56.205 [232/268] Linking target lib/librte_dmadev.so.24.1 00:08:56.205 [233/268] Linking target lib/librte_ring.so.24.1 00:08:56.205 [234/268] Linking target lib/librte_timer.so.24.1 00:08:56.205 [235/268] Linking target lib/librte_meter.so.24.1 00:08:56.205 [236/268] Linking target lib/librte_pci.so.24.1 00:08:56.205 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:56.464 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:56.464 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:56.464 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:56.464 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:56.464 [242/268] Linking target lib/librte_rcu.so.24.1 00:08:56.464 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:56.464 [244/268] Linking target lib/librte_mempool.so.24.1 00:08:56.464 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:56.464 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:56.464 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:56.722 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:56.722 [249/268] Linking target lib/librte_mbuf.so.24.1 00:08:56.722 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:56.980 [251/268] 
Linking target lib/librte_reorder.so.24.1 00:08:56.980 [252/268] Linking target lib/librte_net.so.24.1 00:08:56.980 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:08:56.980 [254/268] Linking target lib/librte_compressdev.so.24.1 00:08:56.980 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:56.980 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:56.980 [257/268] Linking target lib/librte_cmdline.so.24.1 00:08:56.980 [258/268] Linking target lib/librte_hash.so.24.1 00:08:56.980 [259/268] Linking target lib/librte_security.so.24.1 00:08:56.980 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:57.237 [261/268] Linking target lib/librte_ethdev.so.24.1 00:08:57.237 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:57.237 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:57.503 [264/268] Linking target lib/librte_power.so.24.1 00:09:00.031 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:00.031 [266/268] Linking static target lib/librte_vhost.a 00:09:01.404 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:01.404 [268/268] Linking target lib/librte_vhost.so.24.1 00:09:01.404 INFO: autodetecting backend as ninja 00:09:01.404 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:23.378 CC lib/ut_mock/mock.o 00:09:23.378 CC lib/log/log.o 00:09:23.378 CC lib/log/log_flags.o 00:09:23.378 CC lib/ut/ut.o 00:09:23.378 CC lib/log/log_deprecated.o 00:09:23.378 LIB libspdk_ut.a 00:09:23.378 LIB libspdk_ut_mock.a 00:09:23.378 LIB libspdk_log.a 00:09:23.378 SO libspdk_ut.so.2.0 00:09:23.378 SO libspdk_ut_mock.so.6.0 00:09:23.378 SO libspdk_log.so.7.1 00:09:23.378 SYMLINK libspdk_ut.so 
00:09:23.378 SYMLINK libspdk_ut_mock.so 00:09:23.378 SYMLINK libspdk_log.so 00:09:23.378 CC lib/dma/dma.o 00:09:23.378 CC lib/ioat/ioat.o 00:09:23.378 CC lib/util/base64.o 00:09:23.378 CC lib/util/crc16.o 00:09:23.378 CC lib/util/cpuset.o 00:09:23.378 CC lib/util/bit_array.o 00:09:23.378 CC lib/util/crc32c.o 00:09:23.378 CC lib/util/crc32.o 00:09:23.378 CXX lib/trace_parser/trace.o 00:09:23.378 CC lib/vfio_user/host/vfio_user_pci.o 00:09:23.378 CC lib/util/crc32_ieee.o 00:09:23.378 CC lib/util/crc64.o 00:09:23.378 CC lib/util/dif.o 00:09:23.378 CC lib/util/fd.o 00:09:23.378 LIB libspdk_dma.a 00:09:23.378 CC lib/util/fd_group.o 00:09:23.379 CC lib/util/file.o 00:09:23.379 SO libspdk_dma.so.5.0 00:09:23.379 CC lib/vfio_user/host/vfio_user.o 00:09:23.379 CC lib/util/hexlify.o 00:09:23.379 SYMLINK libspdk_dma.so 00:09:23.379 CC lib/util/iov.o 00:09:23.379 LIB libspdk_ioat.a 00:09:23.379 SO libspdk_ioat.so.7.0 00:09:23.379 CC lib/util/math.o 00:09:23.379 CC lib/util/net.o 00:09:23.379 SYMLINK libspdk_ioat.so 00:09:23.379 CC lib/util/pipe.o 00:09:23.379 CC lib/util/strerror_tls.o 00:09:23.379 CC lib/util/string.o 00:09:23.379 CC lib/util/uuid.o 00:09:23.379 LIB libspdk_vfio_user.a 00:09:23.379 CC lib/util/xor.o 00:09:23.379 SO libspdk_vfio_user.so.5.0 00:09:23.379 CC lib/util/zipf.o 00:09:23.379 CC lib/util/md5.o 00:09:23.379 SYMLINK libspdk_vfio_user.so 00:09:23.637 LIB libspdk_util.a 00:09:23.637 SO libspdk_util.so.10.1 00:09:23.895 SYMLINK libspdk_util.so 00:09:23.895 LIB libspdk_trace_parser.a 00:09:23.895 SO libspdk_trace_parser.so.6.0 00:09:23.895 SYMLINK libspdk_trace_parser.so 00:09:23.895 CC lib/idxd/idxd.o 00:09:23.895 CC lib/idxd/idxd_user.o 00:09:23.895 CC lib/idxd/idxd_kernel.o 00:09:24.153 CC lib/conf/conf.o 00:09:24.153 CC lib/vmd/vmd.o 00:09:24.153 CC lib/vmd/led.o 00:09:24.153 CC lib/env_dpdk/env.o 00:09:24.153 CC lib/env_dpdk/memory.o 00:09:24.153 CC lib/json/json_parse.o 00:09:24.153 CC lib/rdma_utils/rdma_utils.o 00:09:24.153 CC lib/env_dpdk/pci.o 
00:09:24.153 CC lib/env_dpdk/init.o 00:09:24.438 CC lib/json/json_util.o 00:09:24.438 CC lib/json/json_write.o 00:09:24.438 LIB libspdk_rdma_utils.a 00:09:24.438 LIB libspdk_conf.a 00:09:24.438 SO libspdk_rdma_utils.so.1.0 00:09:24.438 SO libspdk_conf.so.6.0 00:09:24.438 SYMLINK libspdk_rdma_utils.so 00:09:24.438 SYMLINK libspdk_conf.so 00:09:24.438 CC lib/env_dpdk/threads.o 00:09:24.697 CC lib/env_dpdk/pci_ioat.o 00:09:24.697 CC lib/env_dpdk/pci_virtio.o 00:09:24.697 CC lib/env_dpdk/pci_vmd.o 00:09:24.697 CC lib/rdma_provider/common.o 00:09:24.697 CC lib/env_dpdk/pci_idxd.o 00:09:24.697 LIB libspdk_json.a 00:09:24.697 SO libspdk_json.so.6.0 00:09:24.697 CC lib/env_dpdk/pci_event.o 00:09:24.697 CC lib/env_dpdk/sigbus_handler.o 00:09:24.697 SYMLINK libspdk_json.so 00:09:24.697 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:24.956 LIB libspdk_idxd.a 00:09:24.956 CC lib/env_dpdk/pci_dpdk.o 00:09:24.956 SO libspdk_idxd.so.12.1 00:09:24.956 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:24.956 LIB libspdk_vmd.a 00:09:24.956 SO libspdk_vmd.so.6.0 00:09:24.956 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:24.956 SYMLINK libspdk_idxd.so 00:09:24.956 CC lib/jsonrpc/jsonrpc_server.o 00:09:24.956 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:24.956 CC lib/jsonrpc/jsonrpc_client.o 00:09:24.956 SYMLINK libspdk_vmd.so 00:09:24.956 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:24.956 LIB libspdk_rdma_provider.a 00:09:25.214 SO libspdk_rdma_provider.so.7.0 00:09:25.214 SYMLINK libspdk_rdma_provider.so 00:09:25.214 LIB libspdk_jsonrpc.a 00:09:25.472 SO libspdk_jsonrpc.so.6.0 00:09:25.472 SYMLINK libspdk_jsonrpc.so 00:09:25.731 CC lib/rpc/rpc.o 00:09:25.989 LIB libspdk_env_dpdk.a 00:09:25.989 LIB libspdk_rpc.a 00:09:25.989 SO libspdk_env_dpdk.so.15.1 00:09:25.989 SO libspdk_rpc.so.6.0 00:09:25.989 SYMLINK libspdk_rpc.so 00:09:26.246 SYMLINK libspdk_env_dpdk.so 00:09:26.246 CC lib/trace/trace.o 00:09:26.246 CC lib/trace/trace_flags.o 00:09:26.246 CC lib/trace/trace_rpc.o 00:09:26.246 CC lib/notify/notify.o 
00:09:26.246 CC lib/notify/notify_rpc.o 00:09:26.246 CC lib/keyring/keyring.o 00:09:26.246 CC lib/keyring/keyring_rpc.o 00:09:26.527 LIB libspdk_notify.a 00:09:26.527 SO libspdk_notify.so.6.0 00:09:26.527 LIB libspdk_trace.a 00:09:26.527 SYMLINK libspdk_notify.so 00:09:26.527 LIB libspdk_keyring.a 00:09:26.784 SO libspdk_trace.so.11.0 00:09:26.785 SO libspdk_keyring.so.2.0 00:09:26.785 SYMLINK libspdk_trace.so 00:09:26.785 SYMLINK libspdk_keyring.so 00:09:27.043 CC lib/thread/iobuf.o 00:09:27.043 CC lib/thread/thread.o 00:09:27.043 CC lib/sock/sock.o 00:09:27.043 CC lib/sock/sock_rpc.o 00:09:27.610 LIB libspdk_sock.a 00:09:27.610 SO libspdk_sock.so.10.0 00:09:27.610 SYMLINK libspdk_sock.so 00:09:27.868 CC lib/nvme/nvme_fabric.o 00:09:27.868 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:27.868 CC lib/nvme/nvme_ctrlr.o 00:09:27.868 CC lib/nvme/nvme_ns_cmd.o 00:09:27.868 CC lib/nvme/nvme_qpair.o 00:09:27.868 CC lib/nvme/nvme_ns.o 00:09:27.868 CC lib/nvme/nvme_pcie.o 00:09:27.868 CC lib/nvme/nvme_pcie_common.o 00:09:27.868 CC lib/nvme/nvme.o 00:09:28.869 CC lib/nvme/nvme_quirks.o 00:09:28.869 CC lib/nvme/nvme_transport.o 00:09:28.869 CC lib/nvme/nvme_discovery.o 00:09:28.869 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:28.869 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:29.127 CC lib/nvme/nvme_tcp.o 00:09:29.127 LIB libspdk_thread.a 00:09:29.127 CC lib/nvme/nvme_opal.o 00:09:29.127 SO libspdk_thread.so.11.0 00:09:29.127 SYMLINK libspdk_thread.so 00:09:29.127 CC lib/nvme/nvme_io_msg.o 00:09:29.385 CC lib/nvme/nvme_poll_group.o 00:09:29.385 CC lib/nvme/nvme_zns.o 00:09:29.643 CC lib/nvme/nvme_stubs.o 00:09:29.643 CC lib/nvme/nvme_auth.o 00:09:29.643 CC lib/accel/accel.o 00:09:29.643 CC lib/accel/accel_rpc.o 00:09:29.899 CC lib/nvme/nvme_cuse.o 00:09:29.899 CC lib/nvme/nvme_rdma.o 00:09:29.899 CC lib/accel/accel_sw.o 00:09:30.156 CC lib/blob/blobstore.o 00:09:30.156 CC lib/init/json_config.o 00:09:30.415 CC lib/virtio/virtio.o 00:09:30.415 CC lib/init/subsystem.o 00:09:30.673 CC 
lib/init/subsystem_rpc.o 00:09:30.673 CC lib/init/rpc.o 00:09:30.673 CC lib/blob/request.o 00:09:30.973 CC lib/virtio/virtio_vhost_user.o 00:09:30.973 CC lib/virtio/virtio_vfio_user.o 00:09:30.973 CC lib/blob/zeroes.o 00:09:30.973 LIB libspdk_init.a 00:09:30.973 SO libspdk_init.so.6.0 00:09:30.973 CC lib/virtio/virtio_pci.o 00:09:30.973 CC lib/fsdev/fsdev.o 00:09:30.973 SYMLINK libspdk_init.so 00:09:30.973 CC lib/blob/blob_bs_dev.o 00:09:31.252 CC lib/fsdev/fsdev_io.o 00:09:31.252 CC lib/fsdev/fsdev_rpc.o 00:09:31.252 LIB libspdk_accel.a 00:09:31.252 CC lib/event/app.o 00:09:31.252 CC lib/event/reactor.o 00:09:31.252 SO libspdk_accel.so.16.0 00:09:31.252 CC lib/event/log_rpc.o 00:09:31.252 CC lib/event/app_rpc.o 00:09:31.252 SYMLINK libspdk_accel.so 00:09:31.252 LIB libspdk_virtio.a 00:09:31.510 SO libspdk_virtio.so.7.0 00:09:31.510 SYMLINK libspdk_virtio.so 00:09:31.510 CC lib/event/scheduler_static.o 00:09:31.510 CC lib/bdev/bdev.o 00:09:31.510 CC lib/bdev/bdev_rpc.o 00:09:31.510 CC lib/bdev/bdev_zone.o 00:09:31.768 CC lib/bdev/part.o 00:09:31.768 CC lib/bdev/scsi_nvme.o 00:09:31.768 LIB libspdk_nvme.a 00:09:31.768 LIB libspdk_event.a 00:09:32.026 SO libspdk_event.so.14.0 00:09:32.026 SYMLINK libspdk_event.so 00:09:32.026 LIB libspdk_fsdev.a 00:09:32.026 SO libspdk_nvme.so.15.0 00:09:32.026 SO libspdk_fsdev.so.2.0 00:09:32.308 SYMLINK libspdk_fsdev.so 00:09:32.308 SYMLINK libspdk_nvme.so 00:09:32.578 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:33.144 LIB libspdk_fuse_dispatcher.a 00:09:33.402 SO libspdk_fuse_dispatcher.so.1.0 00:09:33.403 SYMLINK libspdk_fuse_dispatcher.so 00:09:34.777 LIB libspdk_blob.a 00:09:34.777 SO libspdk_blob.so.11.0 00:09:34.777 SYMLINK libspdk_blob.so 00:09:35.038 CC lib/blobfs/tree.o 00:09:35.038 CC lib/blobfs/blobfs.o 00:09:35.038 CC lib/lvol/lvol.o 00:09:35.298 LIB libspdk_bdev.a 00:09:35.298 SO libspdk_bdev.so.17.0 00:09:35.556 SYMLINK libspdk_bdev.so 00:09:35.556 CC lib/nvmf/ctrlr.o 00:09:35.556 CC lib/nvmf/ctrlr_bdev.o 
00:09:35.556 CC lib/nvmf/ctrlr_discovery.o 00:09:35.556 CC lib/scsi/dev.o 00:09:35.556 CC lib/scsi/lun.o 00:09:35.556 CC lib/nbd/nbd.o 00:09:35.556 CC lib/ftl/ftl_core.o 00:09:35.814 CC lib/ublk/ublk.o 00:09:36.072 CC lib/ublk/ublk_rpc.o 00:09:36.072 CC lib/scsi/port.o 00:09:36.072 CC lib/ftl/ftl_init.o 00:09:36.330 CC lib/nbd/nbd_rpc.o 00:09:36.330 CC lib/ftl/ftl_layout.o 00:09:36.330 CC lib/scsi/scsi.o 00:09:36.330 CC lib/nvmf/subsystem.o 00:09:36.330 LIB libspdk_blobfs.a 00:09:36.330 SO libspdk_blobfs.so.10.0 00:09:36.330 LIB libspdk_nbd.a 00:09:36.330 LIB libspdk_lvol.a 00:09:36.330 CC lib/scsi/scsi_bdev.o 00:09:36.330 CC lib/ftl/ftl_debug.o 00:09:36.588 SO libspdk_lvol.so.10.0 00:09:36.588 SO libspdk_nbd.so.7.0 00:09:36.588 SYMLINK libspdk_blobfs.so 00:09:36.588 CC lib/nvmf/nvmf.o 00:09:36.588 SYMLINK libspdk_lvol.so 00:09:36.588 SYMLINK libspdk_nbd.so 00:09:36.588 CC lib/ftl/ftl_io.o 00:09:36.588 CC lib/nvmf/nvmf_rpc.o 00:09:36.588 LIB libspdk_ublk.a 00:09:36.588 CC lib/nvmf/transport.o 00:09:36.588 CC lib/nvmf/tcp.o 00:09:36.588 SO libspdk_ublk.so.3.0 00:09:36.588 CC lib/nvmf/stubs.o 00:09:36.848 SYMLINK libspdk_ublk.so 00:09:36.848 CC lib/scsi/scsi_pr.o 00:09:36.848 CC lib/ftl/ftl_sb.o 00:09:37.104 CC lib/ftl/ftl_l2p.o 00:09:37.104 CC lib/scsi/scsi_rpc.o 00:09:37.104 CC lib/ftl/ftl_l2p_flat.o 00:09:37.363 CC lib/scsi/task.o 00:09:37.363 CC lib/ftl/ftl_nv_cache.o 00:09:37.363 CC lib/nvmf/mdns_server.o 00:09:37.363 CC lib/nvmf/rdma.o 00:09:37.620 LIB libspdk_scsi.a 00:09:37.620 CC lib/nvmf/auth.o 00:09:37.620 SO libspdk_scsi.so.9.0 00:09:37.620 CC lib/ftl/ftl_band.o 00:09:37.620 CC lib/ftl/ftl_band_ops.o 00:09:37.620 SYMLINK libspdk_scsi.so 00:09:37.620 CC lib/ftl/ftl_writer.o 00:09:37.878 CC lib/ftl/ftl_rq.o 00:09:37.878 CC lib/ftl/ftl_reloc.o 00:09:37.878 CC lib/ftl/ftl_l2p_cache.o 00:09:38.136 CC lib/ftl/ftl_p2l.o 00:09:38.136 CC lib/ftl/ftl_p2l_log.o 00:09:38.136 CC lib/ftl/mngt/ftl_mngt.o 00:09:38.394 CC lib/iscsi/conn.o 00:09:38.394 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:09:38.394 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:38.394 CC lib/iscsi/init_grp.o 00:09:38.653 CC lib/iscsi/iscsi.o 00:09:38.653 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:38.653 CC lib/vhost/vhost.o 00:09:38.653 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:38.653 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:38.653 CC lib/iscsi/param.o 00:09:38.653 CC lib/iscsi/portal_grp.o 00:09:38.911 CC lib/iscsi/tgt_node.o 00:09:38.911 CC lib/iscsi/iscsi_subsystem.o 00:09:38.911 CC lib/iscsi/iscsi_rpc.o 00:09:39.169 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:39.169 CC lib/iscsi/task.o 00:09:39.169 CC lib/vhost/vhost_rpc.o 00:09:39.169 CC lib/vhost/vhost_scsi.o 00:09:39.427 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:39.427 CC lib/vhost/vhost_blk.o 00:09:39.427 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:39.427 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:39.427 CC lib/vhost/rte_vhost_user.o 00:09:39.427 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:39.686 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:39.686 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:39.686 CC lib/ftl/utils/ftl_conf.o 00:09:39.686 CC lib/ftl/utils/ftl_md.o 00:09:39.944 CC lib/ftl/utils/ftl_mempool.o 00:09:39.944 CC lib/ftl/utils/ftl_bitmap.o 00:09:39.945 CC lib/ftl/utils/ftl_property.o 00:09:40.204 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:40.204 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:40.204 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:40.204 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:40.204 LIB libspdk_nvmf.a 00:09:40.204 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:40.462 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:40.462 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:40.462 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:40.462 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:40.462 SO libspdk_nvmf.so.20.0 00:09:40.462 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:40.462 LIB libspdk_iscsi.a 00:09:40.462 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:40.462 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:40.462 CC lib/ftl/base/ftl_base_dev.o 00:09:40.720 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:40.720 SO libspdk_iscsi.so.8.0 00:09:40.720 CC lib/ftl/base/ftl_base_bdev.o 00:09:40.720 CC lib/ftl/ftl_trace.o 00:09:40.720 SYMLINK libspdk_nvmf.so 00:09:40.720 SYMLINK libspdk_iscsi.so 00:09:40.720 LIB libspdk_vhost.a 00:09:40.978 SO libspdk_vhost.so.8.0 00:09:40.978 LIB libspdk_ftl.a 00:09:40.978 SYMLINK libspdk_vhost.so 00:09:41.237 SO libspdk_ftl.so.9.0 00:09:41.495 SYMLINK libspdk_ftl.so 00:09:42.060 CC module/env_dpdk/env_dpdk_rpc.o 00:09:42.060 CC module/keyring/linux/keyring.o 00:09:42.060 CC module/scheduler/gscheduler/gscheduler.o 00:09:42.060 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:42.060 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:42.060 CC module/keyring/file/keyring.o 00:09:42.060 CC module/sock/posix/posix.o 00:09:42.060 CC module/fsdev/aio/fsdev_aio.o 00:09:42.060 CC module/blob/bdev/blob_bdev.o 00:09:42.060 CC module/accel/error/accel_error.o 00:09:42.060 LIB libspdk_env_dpdk_rpc.a 00:09:42.060 SO libspdk_env_dpdk_rpc.so.6.0 00:09:42.060 SYMLINK libspdk_env_dpdk_rpc.so 00:09:42.060 CC module/accel/error/accel_error_rpc.o 00:09:42.060 CC module/keyring/linux/keyring_rpc.o 00:09:42.060 LIB libspdk_scheduler_dpdk_governor.a 00:09:42.060 LIB libspdk_scheduler_gscheduler.a 00:09:42.060 CC module/keyring/file/keyring_rpc.o 00:09:42.318 SO libspdk_scheduler_gscheduler.so.4.0 00:09:42.318 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:42.318 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:42.318 SYMLINK libspdk_scheduler_gscheduler.so 00:09:42.318 LIB libspdk_scheduler_dynamic.a 00:09:42.318 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:42.318 CC module/fsdev/aio/linux_aio_mgr.o 00:09:42.318 SO libspdk_scheduler_dynamic.so.4.0 00:09:42.318 LIB libspdk_keyring_linux.a 00:09:42.318 LIB libspdk_accel_error.a 00:09:42.318 SO libspdk_keyring_linux.so.1.0 00:09:42.318 SO libspdk_accel_error.so.2.0 00:09:42.318 LIB libspdk_blob_bdev.a 00:09:42.318 SYMLINK libspdk_scheduler_dynamic.so 00:09:42.318 SO 
libspdk_blob_bdev.so.11.0 00:09:42.318 LIB libspdk_keyring_file.a 00:09:42.318 SYMLINK libspdk_keyring_linux.so 00:09:42.318 SYMLINK libspdk_accel_error.so 00:09:42.318 SYMLINK libspdk_blob_bdev.so 00:09:42.318 SO libspdk_keyring_file.so.2.0 00:09:42.318 CC module/accel/ioat/accel_ioat.o 00:09:42.318 CC module/accel/ioat/accel_ioat_rpc.o 00:09:42.576 SYMLINK libspdk_keyring_file.so 00:09:42.576 CC module/accel/dsa/accel_dsa.o 00:09:42.576 CC module/accel/iaa/accel_iaa.o 00:09:42.576 CC module/accel/iaa/accel_iaa_rpc.o 00:09:42.576 LIB libspdk_accel_ioat.a 00:09:42.576 CC module/bdev/gpt/gpt.o 00:09:42.576 SO libspdk_accel_ioat.so.6.0 00:09:42.836 CC module/blobfs/bdev/blobfs_bdev.o 00:09:42.836 CC module/bdev/delay/vbdev_delay.o 00:09:42.836 CC module/bdev/error/vbdev_error.o 00:09:42.836 SYMLINK libspdk_accel_ioat.so 00:09:42.836 CC module/accel/dsa/accel_dsa_rpc.o 00:09:42.836 CC module/bdev/gpt/vbdev_gpt.o 00:09:42.836 LIB libspdk_accel_iaa.a 00:09:42.836 SO libspdk_accel_iaa.so.3.0 00:09:42.836 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:42.836 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:42.836 LIB libspdk_fsdev_aio.a 00:09:43.094 LIB libspdk_accel_dsa.a 00:09:43.094 SO libspdk_fsdev_aio.so.1.0 00:09:43.094 SO libspdk_accel_dsa.so.5.0 00:09:43.094 SYMLINK libspdk_accel_iaa.so 00:09:43.094 CC module/bdev/error/vbdev_error_rpc.o 00:09:43.094 SYMLINK libspdk_fsdev_aio.so 00:09:43.094 SYMLINK libspdk_accel_dsa.so 00:09:43.094 LIB libspdk_sock_posix.a 00:09:43.094 CC module/bdev/lvol/vbdev_lvol.o 00:09:43.094 LIB libspdk_blobfs_bdev.a 00:09:43.094 SO libspdk_sock_posix.so.6.0 00:09:43.094 LIB libspdk_bdev_delay.a 00:09:43.094 SO libspdk_blobfs_bdev.so.6.0 00:09:43.094 SO libspdk_bdev_delay.so.6.0 00:09:43.094 CC module/bdev/malloc/bdev_malloc.o 00:09:43.094 LIB libspdk_bdev_gpt.a 00:09:43.352 LIB libspdk_bdev_error.a 00:09:43.352 SO libspdk_bdev_gpt.so.6.0 00:09:43.352 SYMLINK libspdk_sock_posix.so 00:09:43.352 SYMLINK libspdk_blobfs_bdev.so 00:09:43.352 SO 
libspdk_bdev_error.so.6.0 00:09:43.352 CC module/bdev/null/bdev_null.o 00:09:43.352 CC module/bdev/nvme/bdev_nvme.o 00:09:43.352 SYMLINK libspdk_bdev_delay.so 00:09:43.352 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:43.352 CC module/bdev/passthru/vbdev_passthru.o 00:09:43.352 SYMLINK libspdk_bdev_gpt.so 00:09:43.352 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:43.352 SYMLINK libspdk_bdev_error.so 00:09:43.352 CC module/bdev/nvme/nvme_rpc.o 00:09:43.352 CC module/bdev/raid/bdev_raid.o 00:09:43.352 CC module/bdev/split/vbdev_split.o 00:09:43.610 CC module/bdev/nvme/bdev_mdns_client.o 00:09:43.610 CC module/bdev/null/bdev_null_rpc.o 00:09:43.610 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:43.610 LIB libspdk_bdev_passthru.a 00:09:43.868 CC module/bdev/split/vbdev_split_rpc.o 00:09:43.868 SO libspdk_bdev_passthru.so.6.0 00:09:43.868 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:43.868 LIB libspdk_bdev_null.a 00:09:43.868 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:43.868 SYMLINK libspdk_bdev_passthru.so 00:09:43.868 SO libspdk_bdev_null.so.6.0 00:09:43.868 CC module/bdev/aio/bdev_aio.o 00:09:43.868 LIB libspdk_bdev_malloc.a 00:09:43.868 SYMLINK libspdk_bdev_null.so 00:09:43.868 LIB libspdk_bdev_split.a 00:09:43.868 SO libspdk_bdev_malloc.so.6.0 00:09:43.868 SO libspdk_bdev_split.so.6.0 00:09:44.125 SYMLINK libspdk_bdev_malloc.so 00:09:44.125 CC module/bdev/ftl/bdev_ftl.o 00:09:44.125 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:44.125 SYMLINK libspdk_bdev_split.so 00:09:44.125 CC module/bdev/raid/bdev_raid_rpc.o 00:09:44.125 CC module/bdev/raid/bdev_raid_sb.o 00:09:44.125 CC module/bdev/iscsi/bdev_iscsi.o 00:09:44.384 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:44.384 LIB libspdk_bdev_lvol.a 00:09:44.384 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:44.384 CC module/bdev/aio/bdev_aio_rpc.o 00:09:44.384 CC module/bdev/nvme/vbdev_opal.o 00:09:44.384 SO libspdk_bdev_lvol.so.6.0 00:09:44.384 LIB libspdk_bdev_ftl.a 00:09:44.384 SO libspdk_bdev_ftl.so.6.0 
00:09:44.384 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:44.384 SYMLINK libspdk_bdev_lvol.so 00:09:44.384 LIB libspdk_bdev_zone_block.a 00:09:44.642 SYMLINK libspdk_bdev_ftl.so 00:09:44.642 CC module/bdev/raid/raid0.o 00:09:44.642 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:44.642 SO libspdk_bdev_zone_block.so.6.0 00:09:44.642 LIB libspdk_bdev_aio.a 00:09:44.642 LIB libspdk_bdev_iscsi.a 00:09:44.642 SO libspdk_bdev_aio.so.6.0 00:09:44.642 SYMLINK libspdk_bdev_zone_block.so 00:09:44.642 CC module/bdev/raid/raid1.o 00:09:44.642 SO libspdk_bdev_iscsi.so.6.0 00:09:44.642 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:44.642 SYMLINK libspdk_bdev_aio.so 00:09:44.642 CC module/bdev/raid/concat.o 00:09:44.642 SYMLINK libspdk_bdev_iscsi.so 00:09:44.642 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:44.642 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:44.642 CC module/bdev/raid/raid5f.o 00:09:45.207 LIB libspdk_bdev_virtio.a 00:09:45.465 SO libspdk_bdev_virtio.so.6.0 00:09:45.465 LIB libspdk_bdev_raid.a 00:09:45.465 SYMLINK libspdk_bdev_virtio.so 00:09:45.465 SO libspdk_bdev_raid.so.6.0 00:09:45.465 SYMLINK libspdk_bdev_raid.so 00:09:46.839 LIB libspdk_bdev_nvme.a 00:09:46.839 SO libspdk_bdev_nvme.so.7.1 00:09:46.839 SYMLINK libspdk_bdev_nvme.so 00:09:47.406 CC module/event/subsystems/sock/sock.o 00:09:47.406 CC module/event/subsystems/scheduler/scheduler.o 00:09:47.406 CC module/event/subsystems/keyring/keyring.o 00:09:47.406 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:47.406 CC module/event/subsystems/iobuf/iobuf.o 00:09:47.406 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:47.406 CC module/event/subsystems/fsdev/fsdev.o 00:09:47.406 CC module/event/subsystems/vmd/vmd.o 00:09:47.406 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:47.716 LIB libspdk_event_fsdev.a 00:09:47.716 LIB libspdk_event_keyring.a 00:09:47.716 LIB libspdk_event_sock.a 00:09:47.716 LIB libspdk_event_vhost_blk.a 00:09:47.716 SO libspdk_event_fsdev.so.1.0 00:09:47.716 LIB 
libspdk_event_vmd.a 00:09:47.716 LIB libspdk_event_iobuf.a 00:09:47.716 SO libspdk_event_sock.so.5.0 00:09:47.716 SO libspdk_event_keyring.so.1.0 00:09:47.716 LIB libspdk_event_scheduler.a 00:09:47.716 SO libspdk_event_vhost_blk.so.3.0 00:09:47.716 SO libspdk_event_vmd.so.6.0 00:09:47.716 SO libspdk_event_iobuf.so.3.0 00:09:47.716 SO libspdk_event_scheduler.so.4.0 00:09:47.716 SYMLINK libspdk_event_fsdev.so 00:09:47.716 SYMLINK libspdk_event_sock.so 00:09:47.716 SYMLINK libspdk_event_keyring.so 00:09:47.716 SYMLINK libspdk_event_vhost_blk.so 00:09:47.716 SYMLINK libspdk_event_iobuf.so 00:09:47.716 SYMLINK libspdk_event_vmd.so 00:09:47.716 SYMLINK libspdk_event_scheduler.so 00:09:47.976 CC module/event/subsystems/accel/accel.o 00:09:48.234 LIB libspdk_event_accel.a 00:09:48.234 SO libspdk_event_accel.so.6.0 00:09:48.234 SYMLINK libspdk_event_accel.so 00:09:48.492 CC module/event/subsystems/bdev/bdev.o 00:09:48.749 LIB libspdk_event_bdev.a 00:09:48.749 SO libspdk_event_bdev.so.6.0 00:09:49.011 SYMLINK libspdk_event_bdev.so 00:09:49.011 CC module/event/subsystems/nbd/nbd.o 00:09:49.011 CC module/event/subsystems/scsi/scsi.o 00:09:49.011 CC module/event/subsystems/ublk/ublk.o 00:09:49.011 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:49.011 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:49.270 LIB libspdk_event_nbd.a 00:09:49.270 LIB libspdk_event_ublk.a 00:09:49.270 LIB libspdk_event_scsi.a 00:09:49.270 SO libspdk_event_nbd.so.6.0 00:09:49.270 SO libspdk_event_ublk.so.3.0 00:09:49.270 SO libspdk_event_scsi.so.6.0 00:09:49.270 SYMLINK libspdk_event_nbd.so 00:09:49.270 SYMLINK libspdk_event_ublk.so 00:09:49.529 SYMLINK libspdk_event_scsi.so 00:09:49.529 LIB libspdk_event_nvmf.a 00:09:49.529 SO libspdk_event_nvmf.so.6.0 00:09:49.529 SYMLINK libspdk_event_nvmf.so 00:09:49.529 CC module/event/subsystems/iscsi/iscsi.o 00:09:49.529 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:49.788 LIB libspdk_event_vhost_scsi.a 00:09:49.788 LIB libspdk_event_iscsi.a 
00:09:49.788 SO libspdk_event_vhost_scsi.so.3.0 00:09:49.788 SO libspdk_event_iscsi.so.6.0 00:09:50.047 SYMLINK libspdk_event_vhost_scsi.so 00:09:50.047 SYMLINK libspdk_event_iscsi.so 00:09:50.047 SO libspdk.so.6.0 00:09:50.047 SYMLINK libspdk.so 00:09:50.305 TEST_HEADER include/spdk/accel.h 00:09:50.305 TEST_HEADER include/spdk/accel_module.h 00:09:50.305 CC app/trace_record/trace_record.o 00:09:50.305 TEST_HEADER include/spdk/assert.h 00:09:50.305 TEST_HEADER include/spdk/barrier.h 00:09:50.305 TEST_HEADER include/spdk/base64.h 00:09:50.305 CC test/rpc_client/rpc_client_test.o 00:09:50.305 TEST_HEADER include/spdk/bdev.h 00:09:50.305 CXX app/trace/trace.o 00:09:50.305 TEST_HEADER include/spdk/bdev_module.h 00:09:50.305 TEST_HEADER include/spdk/bdev_zone.h 00:09:50.305 TEST_HEADER include/spdk/bit_array.h 00:09:50.305 TEST_HEADER include/spdk/bit_pool.h 00:09:50.305 TEST_HEADER include/spdk/blob_bdev.h 00:09:50.305 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:50.305 TEST_HEADER include/spdk/blobfs.h 00:09:50.305 TEST_HEADER include/spdk/blob.h 00:09:50.305 TEST_HEADER include/spdk/conf.h 00:09:50.305 TEST_HEADER include/spdk/config.h 00:09:50.564 TEST_HEADER include/spdk/cpuset.h 00:09:50.564 TEST_HEADER include/spdk/crc16.h 00:09:50.564 TEST_HEADER include/spdk/crc32.h 00:09:50.564 TEST_HEADER include/spdk/crc64.h 00:09:50.564 TEST_HEADER include/spdk/dif.h 00:09:50.564 TEST_HEADER include/spdk/dma.h 00:09:50.564 TEST_HEADER include/spdk/endian.h 00:09:50.564 TEST_HEADER include/spdk/env_dpdk.h 00:09:50.564 CC app/nvmf_tgt/nvmf_main.o 00:09:50.564 TEST_HEADER include/spdk/env.h 00:09:50.564 TEST_HEADER include/spdk/event.h 00:09:50.564 TEST_HEADER include/spdk/fd_group.h 00:09:50.564 TEST_HEADER include/spdk/fd.h 00:09:50.564 TEST_HEADER include/spdk/file.h 00:09:50.564 TEST_HEADER include/spdk/fsdev.h 00:09:50.564 TEST_HEADER include/spdk/fsdev_module.h 00:09:50.564 TEST_HEADER include/spdk/ftl.h 00:09:50.564 TEST_HEADER include/spdk/fuse_dispatcher.h 
00:09:50.564 TEST_HEADER include/spdk/gpt_spec.h 00:09:50.564 TEST_HEADER include/spdk/hexlify.h 00:09:50.564 TEST_HEADER include/spdk/histogram_data.h 00:09:50.564 TEST_HEADER include/spdk/idxd.h 00:09:50.564 TEST_HEADER include/spdk/idxd_spec.h 00:09:50.564 TEST_HEADER include/spdk/init.h 00:09:50.564 TEST_HEADER include/spdk/ioat.h 00:09:50.564 CC examples/util/zipf/zipf.o 00:09:50.564 TEST_HEADER include/spdk/ioat_spec.h 00:09:50.564 CC test/thread/poller_perf/poller_perf.o 00:09:50.564 TEST_HEADER include/spdk/iscsi_spec.h 00:09:50.564 TEST_HEADER include/spdk/json.h 00:09:50.564 TEST_HEADER include/spdk/jsonrpc.h 00:09:50.564 TEST_HEADER include/spdk/keyring.h 00:09:50.564 TEST_HEADER include/spdk/keyring_module.h 00:09:50.564 TEST_HEADER include/spdk/likely.h 00:09:50.564 TEST_HEADER include/spdk/log.h 00:09:50.564 TEST_HEADER include/spdk/lvol.h 00:09:50.564 TEST_HEADER include/spdk/md5.h 00:09:50.564 TEST_HEADER include/spdk/memory.h 00:09:50.564 TEST_HEADER include/spdk/mmio.h 00:09:50.564 TEST_HEADER include/spdk/nbd.h 00:09:50.564 TEST_HEADER include/spdk/net.h 00:09:50.564 CC test/dma/test_dma/test_dma.o 00:09:50.564 TEST_HEADER include/spdk/notify.h 00:09:50.564 TEST_HEADER include/spdk/nvme.h 00:09:50.564 TEST_HEADER include/spdk/nvme_intel.h 00:09:50.564 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:50.564 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:50.564 TEST_HEADER include/spdk/nvme_spec.h 00:09:50.564 TEST_HEADER include/spdk/nvme_zns.h 00:09:50.564 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:50.564 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:50.564 CC test/app/bdev_svc/bdev_svc.o 00:09:50.564 TEST_HEADER include/spdk/nvmf.h 00:09:50.564 TEST_HEADER include/spdk/nvmf_spec.h 00:09:50.564 TEST_HEADER include/spdk/nvmf_transport.h 00:09:50.564 TEST_HEADER include/spdk/opal.h 00:09:50.564 TEST_HEADER include/spdk/opal_spec.h 00:09:50.564 TEST_HEADER include/spdk/pci_ids.h 00:09:50.564 TEST_HEADER include/spdk/pipe.h 00:09:50.564 TEST_HEADER 
include/spdk/queue.h 00:09:50.564 TEST_HEADER include/spdk/reduce.h 00:09:50.564 TEST_HEADER include/spdk/rpc.h 00:09:50.564 TEST_HEADER include/spdk/scheduler.h 00:09:50.564 TEST_HEADER include/spdk/scsi.h 00:09:50.564 TEST_HEADER include/spdk/scsi_spec.h 00:09:50.564 TEST_HEADER include/spdk/sock.h 00:09:50.564 CC test/env/mem_callbacks/mem_callbacks.o 00:09:50.564 TEST_HEADER include/spdk/stdinc.h 00:09:50.564 TEST_HEADER include/spdk/string.h 00:09:50.564 TEST_HEADER include/spdk/thread.h 00:09:50.564 TEST_HEADER include/spdk/trace.h 00:09:50.564 TEST_HEADER include/spdk/trace_parser.h 00:09:50.564 TEST_HEADER include/spdk/tree.h 00:09:50.564 TEST_HEADER include/spdk/ublk.h 00:09:50.564 TEST_HEADER include/spdk/util.h 00:09:50.564 TEST_HEADER include/spdk/uuid.h 00:09:50.564 TEST_HEADER include/spdk/version.h 00:09:50.564 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:50.564 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:50.564 TEST_HEADER include/spdk/vhost.h 00:09:50.564 TEST_HEADER include/spdk/vmd.h 00:09:50.564 TEST_HEADER include/spdk/xor.h 00:09:50.564 TEST_HEADER include/spdk/zipf.h 00:09:50.564 CXX test/cpp_headers/accel.o 00:09:50.564 LINK rpc_client_test 00:09:50.822 LINK nvmf_tgt 00:09:50.822 LINK poller_perf 00:09:50.822 LINK zipf 00:09:50.822 LINK spdk_trace_record 00:09:50.822 CXX test/cpp_headers/accel_module.o 00:09:50.822 LINK bdev_svc 00:09:51.080 LINK spdk_trace 00:09:51.080 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:51.080 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:51.080 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:51.080 CXX test/cpp_headers/assert.o 00:09:51.080 CXX test/cpp_headers/barrier.o 00:09:51.080 CC examples/ioat/perf/perf.o 00:09:51.080 CC examples/vmd/lsvmd/lsvmd.o 00:09:51.080 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:51.080 LINK test_dma 00:09:51.338 CXX test/cpp_headers/base64.o 00:09:51.338 LINK mem_callbacks 00:09:51.338 LINK lsvmd 00:09:51.338 CC app/iscsi_tgt/iscsi_tgt.o 00:09:51.338 CC 
examples/ioat/verify/verify.o 00:09:51.338 LINK ioat_perf 00:09:51.338 CXX test/cpp_headers/bdev.o 00:09:51.597 CC test/env/vtophys/vtophys.o 00:09:51.597 LINK nvme_fuzz 00:09:51.597 CC examples/vmd/led/led.o 00:09:51.597 LINK iscsi_tgt 00:09:51.597 LINK verify 00:09:51.597 CC examples/idxd/perf/perf.o 00:09:51.597 CXX test/cpp_headers/bdev_module.o 00:09:51.597 LINK vtophys 00:09:51.597 CC app/spdk_tgt/spdk_tgt.o 00:09:51.597 LINK vhost_fuzz 00:09:51.854 LINK led 00:09:51.854 CXX test/cpp_headers/bdev_zone.o 00:09:51.854 CC app/spdk_lspci/spdk_lspci.o 00:09:51.854 CXX test/cpp_headers/bit_array.o 00:09:51.854 CC app/spdk_nvme_perf/perf.o 00:09:51.854 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:51.854 LINK spdk_tgt 00:09:51.854 LINK spdk_lspci 00:09:51.854 CC test/env/memory/memory_ut.o 00:09:52.112 CC test/env/pci/pci_ut.o 00:09:52.112 CXX test/cpp_headers/bit_pool.o 00:09:52.112 LINK idxd_perf 00:09:52.112 LINK env_dpdk_post_init 00:09:52.112 CC test/app/histogram_perf/histogram_perf.o 00:09:52.112 CXX test/cpp_headers/blob_bdev.o 00:09:52.112 CXX test/cpp_headers/blobfs_bdev.o 00:09:52.371 LINK histogram_perf 00:09:52.371 CXX test/cpp_headers/blobfs.o 00:09:52.371 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:52.371 CC app/spdk_nvme_identify/identify.o 00:09:52.371 CC examples/thread/thread/thread_ex.o 00:09:52.629 CC test/event/event_perf/event_perf.o 00:09:52.629 LINK pci_ut 00:09:52.629 CC test/event/reactor/reactor.o 00:09:52.629 CXX test/cpp_headers/blob.o 00:09:52.629 LINK interrupt_tgt 00:09:52.629 LINK event_perf 00:09:52.629 LINK reactor 00:09:52.921 LINK thread 00:09:52.921 CXX test/cpp_headers/conf.o 00:09:52.921 CC test/app/jsoncat/jsoncat.o 00:09:52.921 CC test/event/reactor_perf/reactor_perf.o 00:09:52.921 CC examples/sock/hello_world/hello_sock.o 00:09:52.921 CC test/event/app_repeat/app_repeat.o 00:09:52.921 CXX test/cpp_headers/config.o 00:09:52.921 LINK spdk_nvme_perf 00:09:52.921 CXX test/cpp_headers/cpuset.o 00:09:53.179 LINK 
jsoncat 00:09:53.179 LINK reactor_perf 00:09:53.179 LINK app_repeat 00:09:53.179 CXX test/cpp_headers/crc16.o 00:09:53.179 CC examples/accel/perf/accel_perf.o 00:09:53.179 LINK iscsi_fuzz 00:09:53.179 LINK hello_sock 00:09:53.437 CC test/event/scheduler/scheduler.o 00:09:53.437 LINK memory_ut 00:09:53.437 CXX test/cpp_headers/crc32.o 00:09:53.437 CC examples/blob/hello_world/hello_blob.o 00:09:53.437 CC examples/blob/cli/blobcli.o 00:09:53.437 CC test/nvme/aer/aer.o 00:09:53.437 LINK spdk_nvme_identify 00:09:53.695 CC test/app/stub/stub.o 00:09:53.695 CXX test/cpp_headers/crc64.o 00:09:53.695 LINK scheduler 00:09:53.695 CC test/accel/dif/dif.o 00:09:53.695 LINK hello_blob 00:09:53.695 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:53.695 CXX test/cpp_headers/dif.o 00:09:53.695 LINK stub 00:09:53.953 CC app/spdk_nvme_discover/discovery_aer.o 00:09:53.953 LINK aer 00:09:53.953 LINK accel_perf 00:09:53.953 CC app/spdk_top/spdk_top.o 00:09:53.953 CXX test/cpp_headers/dma.o 00:09:53.953 CC test/nvme/reset/reset.o 00:09:54.211 CC test/nvme/sgl/sgl.o 00:09:54.211 LINK spdk_nvme_discover 00:09:54.211 LINK blobcli 00:09:54.211 LINK hello_fsdev 00:09:54.211 CXX test/cpp_headers/endian.o 00:09:54.211 CC app/vhost/vhost.o 00:09:54.211 CC app/spdk_dd/spdk_dd.o 00:09:54.469 CC test/nvme/e2edp/nvme_dp.o 00:09:54.469 LINK reset 00:09:54.469 CXX test/cpp_headers/env_dpdk.o 00:09:54.469 LINK vhost 00:09:54.469 LINK sgl 00:09:54.469 CC app/fio/nvme/fio_plugin.o 00:09:54.469 CC examples/nvme/hello_world/hello_world.o 00:09:54.728 CXX test/cpp_headers/env.o 00:09:54.728 LINK dif 00:09:54.728 CC test/nvme/overhead/overhead.o 00:09:54.728 LINK nvme_dp 00:09:54.728 LINK spdk_dd 00:09:54.728 CC test/nvme/err_injection/err_injection.o 00:09:54.728 CC test/nvme/startup/startup.o 00:09:54.728 CXX test/cpp_headers/event.o 00:09:54.728 LINK hello_world 00:09:54.986 CC test/nvme/reserve/reserve.o 00:09:54.986 CC test/nvme/simple_copy/simple_copy.o 00:09:54.986 LINK err_injection 00:09:54.986 
LINK startup 00:09:54.986 CC test/nvme/connect_stress/connect_stress.o 00:09:54.986 CXX test/cpp_headers/fd_group.o 00:09:54.986 LINK overhead 00:09:55.245 CC examples/nvme/reconnect/reconnect.o 00:09:55.245 LINK spdk_top 00:09:55.245 CXX test/cpp_headers/fd.o 00:09:55.245 LINK spdk_nvme 00:09:55.245 LINK reserve 00:09:55.245 LINK connect_stress 00:09:55.245 CC test/nvme/boot_partition/boot_partition.o 00:09:55.504 LINK simple_copy 00:09:55.504 CC test/nvme/compliance/nvme_compliance.o 00:09:55.504 CC app/fio/bdev/fio_plugin.o 00:09:55.504 CXX test/cpp_headers/file.o 00:09:55.504 CC test/nvme/fused_ordering/fused_ordering.o 00:09:55.504 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:55.504 LINK boot_partition 00:09:55.504 CC test/nvme/fdp/fdp.o 00:09:55.504 CC test/nvme/cuse/cuse.o 00:09:55.762 LINK reconnect 00:09:55.762 CXX test/cpp_headers/fsdev.o 00:09:55.762 LINK doorbell_aers 00:09:55.762 LINK fused_ordering 00:09:55.762 CC examples/bdev/hello_world/hello_bdev.o 00:09:55.762 LINK nvme_compliance 00:09:55.762 CC examples/bdev/bdevperf/bdevperf.o 00:09:55.762 CXX test/cpp_headers/fsdev_module.o 00:09:56.021 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:56.021 CC examples/nvme/arbitration/arbitration.o 00:09:56.021 LINK fdp 00:09:56.021 LINK spdk_bdev 00:09:56.021 LINK hello_bdev 00:09:56.021 CXX test/cpp_headers/ftl.o 00:09:56.021 CC examples/nvme/hotplug/hotplug.o 00:09:56.021 CC test/blobfs/mkfs/mkfs.o 00:09:56.280 CC examples/nvme/abort/abort.o 00:09:56.280 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:56.280 CXX test/cpp_headers/fuse_dispatcher.o 00:09:56.280 LINK mkfs 00:09:56.280 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:56.280 LINK hotplug 00:09:56.280 LINK arbitration 00:09:56.538 LINK cmb_copy 00:09:56.538 CXX test/cpp_headers/gpt_spec.o 00:09:56.538 CXX test/cpp_headers/hexlify.o 00:09:56.538 LINK nvme_manage 00:09:56.538 LINK pmr_persistence 00:09:56.538 CXX test/cpp_headers/histogram_data.o 00:09:56.797 CXX test/cpp_headers/idxd.o 
00:09:56.797 LINK abort 00:09:56.797 CXX test/cpp_headers/idxd_spec.o 00:09:56.797 CXX test/cpp_headers/init.o 00:09:56.797 CXX test/cpp_headers/ioat.o 00:09:56.797 CC test/bdev/bdevio/bdevio.o 00:09:56.797 CXX test/cpp_headers/ioat_spec.o 00:09:56.797 CC test/lvol/esnap/esnap.o 00:09:56.797 CXX test/cpp_headers/iscsi_spec.o 00:09:56.797 CXX test/cpp_headers/json.o 00:09:56.797 CXX test/cpp_headers/jsonrpc.o 00:09:56.797 CXX test/cpp_headers/keyring.o 00:09:56.797 CXX test/cpp_headers/keyring_module.o 00:09:57.055 LINK bdevperf 00:09:57.055 CXX test/cpp_headers/likely.o 00:09:57.055 CXX test/cpp_headers/log.o 00:09:57.055 CXX test/cpp_headers/lvol.o 00:09:57.055 CXX test/cpp_headers/md5.o 00:09:57.055 CXX test/cpp_headers/memory.o 00:09:57.055 CXX test/cpp_headers/mmio.o 00:09:57.055 CXX test/cpp_headers/nbd.o 00:09:57.314 CXX test/cpp_headers/net.o 00:09:57.314 LINK cuse 00:09:57.314 CXX test/cpp_headers/notify.o 00:09:57.314 LINK bdevio 00:09:57.314 CXX test/cpp_headers/nvme.o 00:09:57.314 CXX test/cpp_headers/nvme_intel.o 00:09:57.314 CXX test/cpp_headers/nvme_ocssd.o 00:09:57.314 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:57.314 CXX test/cpp_headers/nvme_spec.o 00:09:57.314 CC examples/nvmf/nvmf/nvmf.o 00:09:57.314 CXX test/cpp_headers/nvme_zns.o 00:09:57.314 CXX test/cpp_headers/nvmf_cmd.o 00:09:57.573 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:57.573 CXX test/cpp_headers/nvmf.o 00:09:57.573 CXX test/cpp_headers/nvmf_spec.o 00:09:57.573 CXX test/cpp_headers/nvmf_transport.o 00:09:57.573 CXX test/cpp_headers/opal.o 00:09:57.573 CXX test/cpp_headers/opal_spec.o 00:09:57.573 CXX test/cpp_headers/pci_ids.o 00:09:57.573 CXX test/cpp_headers/pipe.o 00:09:57.573 CXX test/cpp_headers/queue.o 00:09:57.573 CXX test/cpp_headers/reduce.o 00:09:57.573 CXX test/cpp_headers/rpc.o 00:09:57.573 CXX test/cpp_headers/scheduler.o 00:09:57.831 CXX test/cpp_headers/scsi.o 00:09:57.831 CXX test/cpp_headers/scsi_spec.o 00:09:57.831 LINK nvmf 00:09:57.831 CXX test/cpp_headers/sock.o 
00:09:57.831 CXX test/cpp_headers/stdinc.o 00:09:57.831 CXX test/cpp_headers/string.o 00:09:57.831 CXX test/cpp_headers/thread.o 00:09:57.831 CXX test/cpp_headers/trace.o 00:09:57.831 CXX test/cpp_headers/trace_parser.o 00:09:57.831 CXX test/cpp_headers/tree.o 00:09:57.831 CXX test/cpp_headers/ublk.o 00:09:58.089 CXX test/cpp_headers/util.o 00:09:58.089 CXX test/cpp_headers/uuid.o 00:09:58.089 CXX test/cpp_headers/version.o 00:09:58.089 CXX test/cpp_headers/vfio_user_pci.o 00:09:58.089 CXX test/cpp_headers/vfio_user_spec.o 00:09:58.089 CXX test/cpp_headers/vhost.o 00:09:58.089 CXX test/cpp_headers/vmd.o 00:09:58.089 CXX test/cpp_headers/xor.o 00:09:58.089 CXX test/cpp_headers/zipf.o 00:10:04.651 LINK esnap 00:10:04.651 00:10:04.651 real 1m40.966s 00:10:04.651 user 9m5.288s 00:10:04.651 sys 1m47.233s 00:10:04.651 14:42:34 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:10:04.651 14:42:34 make -- common/autotest_common.sh@10 -- $ set +x 00:10:04.651 ************************************ 00:10:04.651 END TEST make 00:10:04.651 ************************************ 00:10:04.651 14:42:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:04.651 14:42:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:04.651 14:42:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:04.651 14:42:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.651 14:42:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:04.651 14:42:34 -- pm/common@44 -- $ pid=5307 00:10:04.651 14:42:34 -- pm/common@50 -- $ kill -TERM 5307 00:10:04.651 14:42:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.651 14:42:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:04.651 14:42:34 -- pm/common@44 -- $ pid=5309 00:10:04.651 14:42:34 -- pm/common@50 -- $ kill -TERM 5309 00:10:04.651 14:42:34 -- spdk/autorun.sh@26 -- $ (( 
SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:10:04.651 14:42:34 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:04.910 14:42:34 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.910 14:42:34 -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.910 14:42:34 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.910 14:42:34 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.910 14:42:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.910 14:42:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.910 14:42:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.910 14:42:34 -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.910 14:42:34 -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.910 14:42:34 -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.910 14:42:34 -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.910 14:42:34 -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.910 14:42:34 -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.910 14:42:34 -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.910 14:42:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.910 14:42:34 -- scripts/common.sh@344 -- # case "$op" in 00:10:04.910 14:42:34 -- scripts/common.sh@345 -- # : 1 00:10:04.910 14:42:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.910 14:42:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.910 14:42:34 -- scripts/common.sh@365 -- # decimal 1 00:10:04.910 14:42:34 -- scripts/common.sh@353 -- # local d=1 00:10:04.910 14:42:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.910 14:42:34 -- scripts/common.sh@355 -- # echo 1 00:10:04.910 14:42:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.910 14:42:34 -- scripts/common.sh@366 -- # decimal 2 00:10:04.910 14:42:34 -- scripts/common.sh@353 -- # local d=2 00:10:04.910 14:42:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.910 14:42:34 -- scripts/common.sh@355 -- # echo 2 00:10:04.910 14:42:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.910 14:42:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.910 14:42:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.910 14:42:34 -- scripts/common.sh@368 -- # return 0 00:10:04.910 14:42:34 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.910 14:42:34 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.910 --rc genhtml_branch_coverage=1 00:10:04.910 --rc genhtml_function_coverage=1 00:10:04.910 --rc genhtml_legend=1 00:10:04.910 --rc geninfo_all_blocks=1 00:10:04.910 --rc geninfo_unexecuted_blocks=1 00:10:04.910 00:10:04.910 ' 00:10:04.910 14:42:34 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.910 --rc genhtml_branch_coverage=1 00:10:04.910 --rc genhtml_function_coverage=1 00:10:04.910 --rc genhtml_legend=1 00:10:04.910 --rc geninfo_all_blocks=1 00:10:04.910 --rc geninfo_unexecuted_blocks=1 00:10:04.910 00:10:04.910 ' 00:10:04.910 14:42:34 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.910 --rc genhtml_branch_coverage=1 00:10:04.910 --rc 
genhtml_function_coverage=1 00:10:04.910 --rc genhtml_legend=1 00:10:04.910 --rc geninfo_all_blocks=1 00:10:04.910 --rc geninfo_unexecuted_blocks=1 00:10:04.910 00:10:04.910 ' 00:10:04.910 14:42:34 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.910 --rc genhtml_branch_coverage=1 00:10:04.910 --rc genhtml_function_coverage=1 00:10:04.910 --rc genhtml_legend=1 00:10:04.910 --rc geninfo_all_blocks=1 00:10:04.910 --rc geninfo_unexecuted_blocks=1 00:10:04.910 00:10:04.910 ' 00:10:04.910 14:42:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.910 14:42:34 -- nvmf/common.sh@7 -- # uname -s 00:10:04.910 14:42:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.910 14:42:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.910 14:42:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.910 14:42:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.910 14:42:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.910 14:42:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.910 14:42:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.910 14:42:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.910 14:42:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.910 14:42:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.910 14:42:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9048918-6b2b-48d9-9a25-8aa126fad89b 00:10:04.910 14:42:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9048918-6b2b-48d9-9a25-8aa126fad89b 00:10:04.910 14:42:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.910 14:42:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.910 14:42:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:04.910 14:42:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:10:04.910 14:42:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.910 14:42:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.910 14:42:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.910 14:42:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.910 14:42:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.910 14:42:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.910 14:42:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.910 14:42:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.910 14:42:34 -- paths/export.sh@5 -- # export PATH 00:10:04.910 14:42:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.910 14:42:34 -- nvmf/common.sh@51 -- # : 0 00:10:04.910 14:42:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.910 14:42:34 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.910 14:42:34 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:10:04.910 14:42:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.910 14:42:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.910 14:42:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.910 14:42:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.910 14:42:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.910 14:42:34 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.910 14:42:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:04.910 14:42:34 -- spdk/autotest.sh@32 -- # uname -s 00:10:04.910 14:42:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:04.910 14:42:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:04.910 14:42:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:04.910 14:42:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:04.910 14:42:34 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:04.910 14:42:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:04.910 14:42:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:04.910 14:42:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:04.910 14:42:34 -- spdk/autotest.sh@48 -- # udevadm_pid=54387 00:10:04.910 14:42:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:04.910 14:42:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:04.910 14:42:34 -- pm/common@17 -- # local monitor 00:10:04.910 14:42:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.910 14:42:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.910 14:42:34 -- pm/common@25 -- # sleep 1 00:10:04.910 14:42:34 -- pm/common@21 -- # date +%s 00:10:04.910 14:42:34 -- 
pm/common@21 -- # date +%s 00:10:04.910 14:42:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730731354 00:10:04.910 14:42:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730731354 00:10:05.169 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730731354_collect-vmstat.pm.log 00:10:05.169 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730731354_collect-cpu-load.pm.log 00:10:06.106 14:42:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:06.106 14:42:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:06.106 14:42:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.106 14:42:35 -- common/autotest_common.sh@10 -- # set +x 00:10:06.106 14:42:35 -- spdk/autotest.sh@59 -- # create_test_list 00:10:06.106 14:42:35 -- common/autotest_common.sh@750 -- # xtrace_disable 00:10:06.106 14:42:35 -- common/autotest_common.sh@10 -- # set +x 00:10:06.106 14:42:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:06.106 14:42:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:06.106 14:42:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:06.106 14:42:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:06.106 14:42:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:06.106 14:42:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:06.106 14:42:35 -- common/autotest_common.sh@1455 -- # uname 00:10:06.106 14:42:35 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:10:06.106 14:42:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:06.106 14:42:35 -- common/autotest_common.sh@1475 -- 
# uname 00:10:06.106 14:42:35 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:10:06.106 14:42:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:10:06.106 14:42:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:10:06.106 lcov: LCOV version 1.15 00:10:06.106 14:42:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:24.188 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:24.188 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:42.295 14:43:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:10:42.295 14:43:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:42.295 14:43:11 -- common/autotest_common.sh@10 -- # set +x 00:10:42.295 14:43:11 -- spdk/autotest.sh@78 -- # rm -f 00:10:42.295 14:43:11 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:42.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.295 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:42.295 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:42.295 14:43:12 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:10:42.295 14:43:12 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:42.295 14:43:12 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:42.295 14:43:12 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:42.295 
14:43:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:42.295 14:43:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:42.295 14:43:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:42.295 14:43:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:42.295 14:43:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:42.295 14:43:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:42.295 14:43:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:42.295 14:43:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:42.295 14:43:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:42.295 14:43:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:42.295 14:43:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:42.295 14:43:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:10:42.295 14:43:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:10:42.295 14:43:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:42.295 14:43:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:42.295 14:43:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:42.295 14:43:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:10:42.295 14:43:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:10:42.295 14:43:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:42.295 14:43:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:42.295 14:43:12 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:10:42.295 14:43:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:42.295 14:43:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:42.295 14:43:12 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:10:42.295 14:43:12 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:10:42.295 14:43:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:42.295 No valid GPT data, bailing 00:10:42.295 14:43:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:42.295 14:43:12 -- scripts/common.sh@394 -- # pt= 00:10:42.295 14:43:12 -- scripts/common.sh@395 -- # return 1 00:10:42.295 14:43:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:42.295 1+0 records in 00:10:42.295 1+0 records out 00:10:42.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543539 s, 193 MB/s 00:10:42.295 14:43:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:42.295 14:43:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:42.295 14:43:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:10:42.295 14:43:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:10:42.295 14:43:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:42.553 No valid GPT data, bailing 00:10:42.553 14:43:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:42.553 14:43:12 -- scripts/common.sh@394 -- # pt= 00:10:42.553 14:43:12 -- scripts/common.sh@395 -- # return 1 00:10:42.553 14:43:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:42.553 1+0 records in 00:10:42.553 1+0 records out 00:10:42.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448093 s, 234 MB/s 00:10:42.553 14:43:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:42.553 14:43:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:42.553 14:43:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:10:42.553 14:43:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:10:42.553 14:43:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:10:42.553 No valid GPT data, bailing 00:10:42.553 14:43:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:42.553 14:43:12 -- scripts/common.sh@394 -- # pt= 00:10:42.553 14:43:12 -- scripts/common.sh@395 -- # return 1 00:10:42.553 14:43:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:42.553 1+0 records in 00:10:42.553 1+0 records out 00:10:42.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445981 s, 235 MB/s 00:10:42.553 14:43:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:42.553 14:43:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:42.553 14:43:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:10:42.553 14:43:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:10:42.553 14:43:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:42.553 No valid GPT data, bailing 00:10:42.553 14:43:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:42.553 14:43:12 -- scripts/common.sh@394 -- # pt= 00:10:42.553 14:43:12 -- scripts/common.sh@395 -- # return 1 00:10:42.553 14:43:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:42.553 1+0 records in 00:10:42.553 1+0 records out 00:10:42.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00401011 s, 261 MB/s 00:10:42.553 14:43:12 -- spdk/autotest.sh@105 -- # sync 00:10:42.812 14:43:12 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:42.812 14:43:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:42.812 14:43:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:44.712 14:43:14 -- spdk/autotest.sh@111 -- # uname -s 00:10:44.712 14:43:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:10:44.712 14:43:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:10:44.712 14:43:14 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:10:45.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:45.276 Hugepages 00:10:45.276 node hugesize free / total 00:10:45.276 node0 1048576kB 0 / 0 00:10:45.276 node0 2048kB 0 / 0 00:10:45.276 00:10:45.276 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:45.276 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:45.534 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:10:45.534 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:10:45.534 14:43:15 -- spdk/autotest.sh@117 -- # uname -s 00:10:45.534 14:43:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:10:45.534 14:43:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:10:45.534 14:43:15 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:46.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:46.356 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.356 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.356 14:43:16 -- common/autotest_common.sh@1515 -- # sleep 1 00:10:47.290 14:43:17 -- common/autotest_common.sh@1516 -- # bdfs=() 00:10:47.290 14:43:17 -- common/autotest_common.sh@1516 -- # local bdfs 00:10:47.290 14:43:17 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:10:47.290 14:43:17 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:10:47.290 14:43:17 -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:47.290 14:43:17 -- common/autotest_common.sh@1496 -- # local bdfs 00:10:47.290 14:43:17 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:47.290 14:43:17 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:47.290 14:43:17 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:47.548 14:43:17 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:10:47.548 14:43:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:47.548 14:43:17 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:47.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.807 Waiting for block devices as requested 00:10:47.807 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.807 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.065 14:43:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:10:48.065 14:43:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:10:48.065 14:43:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:10:48.065 14:43:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:48.065 14:43:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:48.065 14:43:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:10:48.065 14:43:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:48.065 14:43:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:10:48.065 14:43:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:10:48.065 14:43:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:10:48.065 14:43:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:10:48.065 14:43:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:10:48.065 14:43:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:10:48.065 14:43:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:10:48.065 14:43:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:10:48.065 14:43:17 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:10:48.065 14:43:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:10:48.065 14:43:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:10:48.065 14:43:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:10:48.065 14:43:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:10:48.065 14:43:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:10:48.065 14:43:17 -- common/autotest_common.sh@1541 -- # continue 00:10:48.065 14:43:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:10:48.065 14:43:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:10:48.065 14:43:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:48.065 14:43:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:10:48.065 14:43:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:48.065 14:43:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:10:48.065 14:43:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:48.065 14:43:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:10:48.065 14:43:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:10:48.065 14:43:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:10:48.065 14:43:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:10:48.065 14:43:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:10:48.065 14:43:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:10:48.065 14:43:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:10:48.065 14:43:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:10:48.065 14:43:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:10:48.065 14:43:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:10:48.065 14:43:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:10:48.065 14:43:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:10:48.065 14:43:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:10:48.065 14:43:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:10:48.065 14:43:17 -- common/autotest_common.sh@1541 -- # continue 00:10:48.065 14:43:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:48.065 14:43:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:48.065 14:43:17 -- common/autotest_common.sh@10 -- # set +x 00:10:48.065 14:43:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:48.065 14:43:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.065 14:43:17 -- common/autotest_common.sh@10 -- # set +x 00:10:48.065 14:43:17 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:48.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:48.891 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:48.891 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:48.891 14:43:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:48.891 14:43:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:48.891 14:43:18 -- common/autotest_common.sh@10 -- # set +x 00:10:48.891 14:43:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:48.891 14:43:18 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:10:48.891 14:43:18 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:10:48.891 14:43:18 -- common/autotest_common.sh@1561 -- # bdfs=() 00:10:48.891 14:43:18 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:10:48.891 14:43:18 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:10:48.891 14:43:18 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:10:48.891 14:43:18 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:10:48.891 
14:43:18 -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:48.891 14:43:18 -- common/autotest_common.sh@1496 -- # local bdfs 00:10:48.891 14:43:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:48.891 14:43:18 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:48.891 14:43:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:49.149 14:43:18 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:10:49.149 14:43:18 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:49.149 14:43:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:10:49.149 14:43:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:10:49.149 14:43:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:10:49.149 14:43:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:49.149 14:43:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:10:49.149 14:43:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:10:49.149 14:43:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:10:49.149 14:43:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:49.149 14:43:18 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:10:49.149 14:43:18 -- common/autotest_common.sh@1570 -- # return 0 00:10:49.149 14:43:18 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:10:49.149 14:43:18 -- common/autotest_common.sh@1578 -- # return 0 00:10:49.149 14:43:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:49.149 14:43:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:49.149 14:43:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:49.149 14:43:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:49.149 14:43:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:49.149 14:43:18 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:49.149 14:43:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.149 14:43:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:49.149 14:43:18 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:49.149 14:43:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:49.149 14:43:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:49.149 14:43:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.149 ************************************ 00:10:49.149 START TEST env 00:10:49.149 ************************************ 00:10:49.149 14:43:18 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:49.149 * Looking for test storage... 00:10:49.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:49.149 14:43:18 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:49.149 14:43:18 env -- common/autotest_common.sh@1691 -- # lcov --version 00:10:49.149 14:43:18 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:49.444 14:43:19 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:49.444 14:43:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.444 14:43:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.444 14:43:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.444 14:43:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.444 14:43:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.444 14:43:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.444 14:43:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.444 14:43:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.444 14:43:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.444 14:43:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.444 14:43:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.444 14:43:19 env -- 
scripts/common.sh@344 -- # case "$op" in 00:10:49.444 14:43:19 env -- scripts/common.sh@345 -- # : 1 00:10:49.444 14:43:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.444 14:43:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:49.444 14:43:19 env -- scripts/common.sh@365 -- # decimal 1 00:10:49.444 14:43:19 env -- scripts/common.sh@353 -- # local d=1 00:10:49.444 14:43:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.444 14:43:19 env -- scripts/common.sh@355 -- # echo 1 00:10:49.444 14:43:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.444 14:43:19 env -- scripts/common.sh@366 -- # decimal 2 00:10:49.444 14:43:19 env -- scripts/common.sh@353 -- # local d=2 00:10:49.444 14:43:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.444 14:43:19 env -- scripts/common.sh@355 -- # echo 2 00:10:49.444 14:43:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.444 14:43:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.444 14:43:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.444 14:43:19 env -- scripts/common.sh@368 -- # return 0 00:10:49.444 14:43:19 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.444 14:43:19 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:49.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.444 --rc genhtml_branch_coverage=1 00:10:49.444 --rc genhtml_function_coverage=1 00:10:49.444 --rc genhtml_legend=1 00:10:49.444 --rc geninfo_all_blocks=1 00:10:49.444 --rc geninfo_unexecuted_blocks=1 00:10:49.444 00:10:49.444 ' 00:10:49.444 14:43:19 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:49.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.444 --rc genhtml_branch_coverage=1 00:10:49.444 --rc genhtml_function_coverage=1 00:10:49.444 --rc genhtml_legend=1 00:10:49.444 --rc 
geninfo_all_blocks=1 00:10:49.444 --rc geninfo_unexecuted_blocks=1 00:10:49.444 00:10:49.444 ' 00:10:49.444 14:43:19 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:49.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.444 --rc genhtml_branch_coverage=1 00:10:49.444 --rc genhtml_function_coverage=1 00:10:49.444 --rc genhtml_legend=1 00:10:49.444 --rc geninfo_all_blocks=1 00:10:49.444 --rc geninfo_unexecuted_blocks=1 00:10:49.444 00:10:49.444 ' 00:10:49.444 14:43:19 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:49.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.444 --rc genhtml_branch_coverage=1 00:10:49.444 --rc genhtml_function_coverage=1 00:10:49.444 --rc genhtml_legend=1 00:10:49.444 --rc geninfo_all_blocks=1 00:10:49.444 --rc geninfo_unexecuted_blocks=1 00:10:49.444 00:10:49.444 ' 00:10:49.444 14:43:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:49.444 14:43:19 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:49.444 14:43:19 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:49.444 14:43:19 env -- common/autotest_common.sh@10 -- # set +x 00:10:49.444 ************************************ 00:10:49.444 START TEST env_memory 00:10:49.444 ************************************ 00:10:49.444 14:43:19 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:49.444 00:10:49.444 00:10:49.444 CUnit - A unit testing framework for C - Version 2.1-3 00:10:49.444 http://cunit.sourceforge.net/ 00:10:49.444 00:10:49.444 00:10:49.444 Suite: memory 00:10:49.444 Test: alloc and free memory map ...[2024-11-04 14:43:19.140587] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:49.444 passed 00:10:49.444 Test: mem map translation ...[2024-11-04 14:43:19.203732] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:49.444 [2024-11-04 14:43:19.203859] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:49.444 [2024-11-04 14:43:19.203966] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:49.444 [2024-11-04 14:43:19.204002] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:49.444 passed 00:10:49.444 Test: mem map registration ...[2024-11-04 14:43:19.302954] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:49.444 [2024-11-04 14:43:19.303050] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:49.704 passed 00:10:49.704 Test: mem map adjacent registrations ...passed 00:10:49.704 00:10:49.704 Run Summary: Type Total Ran Passed Failed Inactive 00:10:49.704 suites 1 1 n/a 0 0 00:10:49.704 tests 4 4 4 0 0 00:10:49.704 asserts 152 152 152 0 n/a 00:10:49.704 00:10:49.704 Elapsed time = 0.341 seconds 00:10:49.704 00:10:49.704 real 0m0.389s 00:10:49.704 user 0m0.349s 00:10:49.704 sys 0m0.030s 00:10:49.704 14:43:19 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:49.704 14:43:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:49.704 ************************************ 00:10:49.704 END TEST env_memory 00:10:49.704 ************************************ 00:10:49.704 14:43:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:49.704 
14:43:19 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:49.704 14:43:19 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:49.704 14:43:19 env -- common/autotest_common.sh@10 -- # set +x 00:10:49.704 ************************************ 00:10:49.704 START TEST env_vtophys 00:10:49.704 ************************************ 00:10:49.704 14:43:19 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:49.704 EAL: lib.eal log level changed from notice to debug 00:10:49.704 EAL: Detected lcore 0 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 1 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 2 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 3 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 4 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 5 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 6 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 7 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 8 as core 0 on socket 0 00:10:49.704 EAL: Detected lcore 9 as core 0 on socket 0 00:10:49.704 EAL: Maximum logical cores by configuration: 128 00:10:49.704 EAL: Detected CPU lcores: 10 00:10:49.704 EAL: Detected NUMA nodes: 1 00:10:49.704 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:49.704 EAL: Detected shared linkage of DPDK 00:10:49.976 EAL: No shared files mode enabled, IPC will be disabled 00:10:49.976 EAL: Selected IOVA mode 'PA' 00:10:49.976 EAL: Probing VFIO support... 00:10:49.976 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:49.976 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:49.976 EAL: Ask a virtual area of 0x2e000 bytes 00:10:49.976 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:49.976 EAL: Setting up physically contiguous memory... 
00:10:49.976 EAL: Setting maximum number of open files to 524288 00:10:49.976 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:49.976 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:49.976 EAL: Ask a virtual area of 0x61000 bytes 00:10:49.976 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:49.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:49.976 EAL: Ask a virtual area of 0x400000000 bytes 00:10:49.976 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:49.976 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:49.976 EAL: Ask a virtual area of 0x61000 bytes 00:10:49.976 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:49.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:49.976 EAL: Ask a virtual area of 0x400000000 bytes 00:10:49.976 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:49.976 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:49.976 EAL: Ask a virtual area of 0x61000 bytes 00:10:49.976 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:49.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:49.976 EAL: Ask a virtual area of 0x400000000 bytes 00:10:49.976 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:49.976 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:49.976 EAL: Ask a virtual area of 0x61000 bytes 00:10:49.976 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:49.976 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:49.976 EAL: Ask a virtual area of 0x400000000 bytes 00:10:49.976 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:49.976 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:49.976 EAL: Hugepages will be freed exactly as allocated. 
00:10:49.976 EAL: No shared files mode enabled, IPC is disabled 00:10:49.976 EAL: No shared files mode enabled, IPC is disabled 00:10:49.976 EAL: TSC frequency is ~2200000 KHz 00:10:49.976 EAL: Main lcore 0 is ready (tid=7f093c2b6a40;cpuset=[0]) 00:10:49.976 EAL: Trying to obtain current memory policy. 00:10:49.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:49.977 EAL: Restoring previous memory policy: 0 00:10:49.977 EAL: request: mp_malloc_sync 00:10:49.977 EAL: No shared files mode enabled, IPC is disabled 00:10:49.977 EAL: Heap on socket 0 was expanded by 2MB 00:10:49.977 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:49.977 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:49.977 EAL: Mem event callback 'spdk:(nil)' registered 00:10:49.977 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:10:49.977 00:10:49.977 00:10:49.977 CUnit - A unit testing framework for C - Version 2.1-3 00:10:49.977 http://cunit.sourceforge.net/ 00:10:49.977 00:10:49.977 00:10:49.977 Suite: components_suite 00:10:50.545 Test: vtophys_malloc_test ...passed 00:10:50.545 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:50.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.545 EAL: Restoring previous memory policy: 4 00:10:50.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.545 EAL: request: mp_malloc_sync 00:10:50.545 EAL: No shared files mode enabled, IPC is disabled 00:10:50.545 EAL: Heap on socket 0 was expanded by 4MB 00:10:50.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.545 EAL: request: mp_malloc_sync 00:10:50.545 EAL: No shared files mode enabled, IPC is disabled 00:10:50.545 EAL: Heap on socket 0 was shrunk by 4MB 00:10:50.545 EAL: Trying to obtain current memory policy. 
00:10:50.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.545 EAL: Restoring previous memory policy: 4 00:10:50.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.545 EAL: request: mp_malloc_sync 00:10:50.545 EAL: No shared files mode enabled, IPC is disabled 00:10:50.545 EAL: Heap on socket 0 was expanded by 6MB 00:10:50.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.545 EAL: request: mp_malloc_sync 00:10:50.545 EAL: No shared files mode enabled, IPC is disabled 00:10:50.545 EAL: Heap on socket 0 was shrunk by 6MB 00:10:50.545 EAL: Trying to obtain current memory policy. 00:10:50.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.545 EAL: Restoring previous memory policy: 4 00:10:50.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.545 EAL: request: mp_malloc_sync 00:10:50.545 EAL: No shared files mode enabled, IPC is disabled 00:10:50.545 EAL: Heap on socket 0 was expanded by 10MB 00:10:50.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.545 EAL: request: mp_malloc_sync 00:10:50.545 EAL: No shared files mode enabled, IPC is disabled 00:10:50.545 EAL: Heap on socket 0 was shrunk by 10MB 00:10:50.545 EAL: Trying to obtain current memory policy. 00:10:50.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.545 EAL: Restoring previous memory policy: 4 00:10:50.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.545 EAL: request: mp_malloc_sync 00:10:50.545 EAL: No shared files mode enabled, IPC is disabled 00:10:50.545 EAL: Heap on socket 0 was expanded by 18MB 00:10:50.545 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.545 EAL: request: mp_malloc_sync 00:10:50.545 EAL: No shared files mode enabled, IPC is disabled 00:10:50.545 EAL: Heap on socket 0 was shrunk by 18MB 00:10:50.545 EAL: Trying to obtain current memory policy. 
00:10:50.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.803 EAL: Restoring previous memory policy: 4 00:10:50.803 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.803 EAL: request: mp_malloc_sync 00:10:50.803 EAL: No shared files mode enabled, IPC is disabled 00:10:50.803 EAL: Heap on socket 0 was expanded by 34MB 00:10:50.803 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.803 EAL: request: mp_malloc_sync 00:10:50.803 EAL: No shared files mode enabled, IPC is disabled 00:10:50.803 EAL: Heap on socket 0 was shrunk by 34MB 00:10:50.803 EAL: Trying to obtain current memory policy. 00:10:50.803 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:50.803 EAL: Restoring previous memory policy: 4 00:10:50.803 EAL: Calling mem event callback 'spdk:(nil)' 00:10:50.803 EAL: request: mp_malloc_sync 00:10:50.803 EAL: No shared files mode enabled, IPC is disabled 00:10:50.803 EAL: Heap on socket 0 was expanded by 66MB 00:10:50.803 EAL: Calling mem event callback 'spdk:(nil)' 00:10:51.061 EAL: request: mp_malloc_sync 00:10:51.061 EAL: No shared files mode enabled, IPC is disabled 00:10:51.061 EAL: Heap on socket 0 was shrunk by 66MB 00:10:51.061 EAL: Trying to obtain current memory policy. 00:10:51.061 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:51.061 EAL: Restoring previous memory policy: 4 00:10:51.061 EAL: Calling mem event callback 'spdk:(nil)' 00:10:51.061 EAL: request: mp_malloc_sync 00:10:51.061 EAL: No shared files mode enabled, IPC is disabled 00:10:51.061 EAL: Heap on socket 0 was expanded by 130MB 00:10:51.381 EAL: Calling mem event callback 'spdk:(nil)' 00:10:51.381 EAL: request: mp_malloc_sync 00:10:51.381 EAL: No shared files mode enabled, IPC is disabled 00:10:51.381 EAL: Heap on socket 0 was shrunk by 130MB 00:10:51.639 EAL: Trying to obtain current memory policy. 
00:10:51.639 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:51.639 EAL: Restoring previous memory policy: 4 00:10:51.639 EAL: Calling mem event callback 'spdk:(nil)' 00:10:51.639 EAL: request: mp_malloc_sync 00:10:51.639 EAL: No shared files mode enabled, IPC is disabled 00:10:51.639 EAL: Heap on socket 0 was expanded by 258MB 00:10:52.207 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.207 EAL: request: mp_malloc_sync 00:10:52.207 EAL: No shared files mode enabled, IPC is disabled 00:10:52.207 EAL: Heap on socket 0 was shrunk by 258MB 00:10:52.465 EAL: Trying to obtain current memory policy. 00:10:52.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.723 EAL: Restoring previous memory policy: 4 00:10:52.723 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.723 EAL: request: mp_malloc_sync 00:10:52.723 EAL: No shared files mode enabled, IPC is disabled 00:10:52.723 EAL: Heap on socket 0 was expanded by 514MB 00:10:53.657 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.657 EAL: request: mp_malloc_sync 00:10:53.657 EAL: No shared files mode enabled, IPC is disabled 00:10:53.657 EAL: Heap on socket 0 was shrunk by 514MB 00:10:54.591 EAL: Trying to obtain current memory policy. 
00:10:54.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:54.849 EAL: Restoring previous memory policy: 4 00:10:54.849 EAL: Calling mem event callback 'spdk:(nil)' 00:10:54.849 EAL: request: mp_malloc_sync 00:10:54.849 EAL: No shared files mode enabled, IPC is disabled 00:10:54.849 EAL: Heap on socket 0 was expanded by 1026MB 00:10:56.747 EAL: Calling mem event callback 'spdk:(nil)' 00:10:56.747 EAL: request: mp_malloc_sync 00:10:56.747 EAL: No shared files mode enabled, IPC is disabled 00:10:56.747 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:58.144 passed 00:10:58.144 00:10:58.144 Run Summary: Type Total Ran Passed Failed Inactive 00:10:58.144 suites 1 1 n/a 0 0 00:10:58.144 tests 2 2 2 0 0 00:10:58.144 asserts 5642 5642 5642 0 n/a 00:10:58.144 00:10:58.144 Elapsed time = 8.087 seconds 00:10:58.144 EAL: Calling mem event callback 'spdk:(nil)' 00:10:58.144 EAL: request: mp_malloc_sync 00:10:58.144 EAL: No shared files mode enabled, IPC is disabled 00:10:58.144 EAL: Heap on socket 0 was shrunk by 2MB 00:10:58.144 EAL: No shared files mode enabled, IPC is disabled 00:10:58.144 EAL: No shared files mode enabled, IPC is disabled 00:10:58.144 EAL: No shared files mode enabled, IPC is disabled 00:10:58.144 00:10:58.144 real 0m8.454s 00:10:58.144 user 0m7.052s 00:10:58.144 sys 0m1.219s 00:10:58.144 14:43:27 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.144 14:43:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:58.144 ************************************ 00:10:58.144 END TEST env_vtophys 00:10:58.144 ************************************ 00:10:58.144 14:43:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:58.144 14:43:27 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:58.144 14:43:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.144 14:43:28 env -- common/autotest_common.sh@10 -- # set +x 00:10:58.144 
************************************ 00:10:58.144 START TEST env_pci 00:10:58.144 ************************************ 00:10:58.144 14:43:28 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:58.402 00:10:58.402 00:10:58.402 CUnit - A unit testing framework for C - Version 2.1-3 00:10:58.402 http://cunit.sourceforge.net/ 00:10:58.402 00:10:58.402 00:10:58.402 Suite: pci 00:10:58.402 Test: pci_hook ...[2024-11-04 14:43:28.047401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56765 has claimed it 00:10:58.402 passed 00:10:58.402 00:10:58.402 EAL: Cannot find device (10000:00:01.0) 00:10:58.402 EAL: Failed to attach device on primary process 00:10:58.402 Run Summary: Type Total Ran Passed Failed Inactive 00:10:58.402 suites 1 1 n/a 0 0 00:10:58.402 tests 1 1 1 0 0 00:10:58.402 asserts 25 25 25 0 n/a 00:10:58.402 00:10:58.402 Elapsed time = 0.008 seconds 00:10:58.402 00:10:58.402 real 0m0.078s 00:10:58.402 user 0m0.030s 00:10:58.402 sys 0m0.046s 00:10:58.402 14:43:28 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.402 14:43:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:58.402 ************************************ 00:10:58.402 END TEST env_pci 00:10:58.402 ************************************ 00:10:58.402 14:43:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:58.402 14:43:28 env -- env/env.sh@15 -- # uname 00:10:58.402 14:43:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:58.402 14:43:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:58.402 14:43:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:58.402 14:43:28 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:58.402 14:43:28 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.402 14:43:28 env -- common/autotest_common.sh@10 -- # set +x 00:10:58.402 ************************************ 00:10:58.402 START TEST env_dpdk_post_init 00:10:58.402 ************************************ 00:10:58.402 14:43:28 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:58.402 EAL: Detected CPU lcores: 10 00:10:58.402 EAL: Detected NUMA nodes: 1 00:10:58.402 EAL: Detected shared linkage of DPDK 00:10:58.402 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:58.402 EAL: Selected IOVA mode 'PA' 00:10:58.660 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:58.660 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:58.660 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:58.660 Starting DPDK initialization... 00:10:58.660 Starting SPDK post initialization... 00:10:58.660 SPDK NVMe probe 00:10:58.660 Attaching to 0000:00:10.0 00:10:58.660 Attaching to 0000:00:11.0 00:10:58.660 Attached to 0000:00:10.0 00:10:58.660 Attached to 0000:00:11.0 00:10:58.660 Cleaning up... 
00:10:58.660 00:10:58.660 real 0m0.290s 00:10:58.660 user 0m0.101s 00:10:58.660 sys 0m0.089s 00:10:58.660 14:43:28 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.660 ************************************ 00:10:58.660 END TEST env_dpdk_post_init 00:10:58.660 14:43:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:58.660 ************************************ 00:10:58.660 14:43:28 env -- env/env.sh@26 -- # uname 00:10:58.660 14:43:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:58.660 14:43:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:58.660 14:43:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:58.660 14:43:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.660 14:43:28 env -- common/autotest_common.sh@10 -- # set +x 00:10:58.660 ************************************ 00:10:58.660 START TEST env_mem_callbacks 00:10:58.660 ************************************ 00:10:58.660 14:43:28 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:58.660 EAL: Detected CPU lcores: 10 00:10:58.660 EAL: Detected NUMA nodes: 1 00:10:58.660 EAL: Detected shared linkage of DPDK 00:10:58.918 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:58.918 EAL: Selected IOVA mode 'PA' 00:10:58.918 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:58.918 00:10:58.918 00:10:58.918 CUnit - A unit testing framework for C - Version 2.1-3 00:10:58.918 http://cunit.sourceforge.net/ 00:10:58.918 00:10:58.919 00:10:58.919 Suite: memory 00:10:58.919 Test: test ... 
00:10:58.919 register 0x200000200000 2097152 00:10:58.919 malloc 3145728 00:10:58.919 register 0x200000400000 4194304 00:10:58.919 buf 0x2000004fffc0 len 3145728 PASSED 00:10:58.919 malloc 64 00:10:58.919 buf 0x2000004ffec0 len 64 PASSED 00:10:58.919 malloc 4194304 00:10:58.919 register 0x200000800000 6291456 00:10:58.919 buf 0x2000009fffc0 len 4194304 PASSED 00:10:58.919 free 0x2000004fffc0 3145728 00:10:58.919 free 0x2000004ffec0 64 00:10:58.919 unregister 0x200000400000 4194304 PASSED 00:10:58.919 free 0x2000009fffc0 4194304 00:10:58.919 unregister 0x200000800000 6291456 PASSED 00:10:58.919 malloc 8388608 00:10:58.919 register 0x200000400000 10485760 00:10:58.919 buf 0x2000005fffc0 len 8388608 PASSED 00:10:58.919 free 0x2000005fffc0 8388608 00:10:58.919 unregister 0x200000400000 10485760 PASSED 00:10:58.919 passed 00:10:58.919 00:10:58.919 Run Summary: Type Total Ran Passed Failed Inactive 00:10:58.919 suites 1 1 n/a 0 0 00:10:58.919 tests 1 1 1 0 0 00:10:58.919 asserts 15 15 15 0 n/a 00:10:58.919 00:10:58.919 Elapsed time = 0.057 seconds 00:10:58.919 00:10:58.919 real 0m0.270s 00:10:58.919 user 0m0.092s 00:10:58.919 sys 0m0.076s 00:10:58.919 14:43:28 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.919 14:43:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:58.919 ************************************ 00:10:58.919 END TEST env_mem_callbacks 00:10:58.919 ************************************ 00:10:58.919 00:10:58.919 real 0m9.933s 00:10:58.919 user 0m7.822s 00:10:58.919 sys 0m1.708s 00:10:58.919 14:43:28 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.919 14:43:28 env -- common/autotest_common.sh@10 -- # set +x 00:10:58.919 ************************************ 00:10:58.919 END TEST env 00:10:58.919 ************************************ 00:10:59.177 14:43:28 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:59.177 14:43:28 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:59.177 14:43:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.177 14:43:28 -- common/autotest_common.sh@10 -- # set +x 00:10:59.177 ************************************ 00:10:59.177 START TEST rpc 00:10:59.177 ************************************ 00:10:59.177 14:43:28 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:59.177 * Looking for test storage... 00:10:59.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:59.177 14:43:28 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:59.177 14:43:28 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:59.177 14:43:28 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:59.177 14:43:29 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:59.177 14:43:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.177 14:43:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.177 14:43:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.177 14:43:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.177 14:43:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.177 14:43:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.177 14:43:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.177 14:43:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.177 14:43:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.177 14:43:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.177 14:43:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.177 14:43:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:59.177 14:43:29 rpc -- scripts/common.sh@345 -- # : 1 00:10:59.177 14:43:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.177 14:43:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.177 14:43:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:59.177 14:43:29 rpc -- scripts/common.sh@353 -- # local d=1 00:10:59.177 14:43:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.177 14:43:29 rpc -- scripts/common.sh@355 -- # echo 1 00:10:59.177 14:43:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.177 14:43:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:59.177 14:43:29 rpc -- scripts/common.sh@353 -- # local d=2 00:10:59.177 14:43:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.177 14:43:29 rpc -- scripts/common.sh@355 -- # echo 2 00:10:59.177 14:43:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.177 14:43:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.177 14:43:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.177 14:43:29 rpc -- scripts/common.sh@368 -- # return 0 00:10:59.177 14:43:29 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.177 14:43:29 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:59.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.177 --rc genhtml_branch_coverage=1 00:10:59.177 --rc genhtml_function_coverage=1 00:10:59.177 --rc genhtml_legend=1 00:10:59.177 --rc geninfo_all_blocks=1 00:10:59.177 --rc geninfo_unexecuted_blocks=1 00:10:59.177 00:10:59.177 ' 00:10:59.177 14:43:29 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:59.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.177 --rc genhtml_branch_coverage=1 00:10:59.177 --rc genhtml_function_coverage=1 00:10:59.177 --rc genhtml_legend=1 00:10:59.177 --rc geninfo_all_blocks=1 00:10:59.177 --rc geninfo_unexecuted_blocks=1 00:10:59.177 00:10:59.177 ' 00:10:59.177 14:43:29 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:59.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:59.177 --rc genhtml_branch_coverage=1 00:10:59.177 --rc genhtml_function_coverage=1 00:10:59.177 --rc genhtml_legend=1 00:10:59.177 --rc geninfo_all_blocks=1 00:10:59.177 --rc geninfo_unexecuted_blocks=1 00:10:59.178 00:10:59.178 ' 00:10:59.178 14:43:29 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:59.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.178 --rc genhtml_branch_coverage=1 00:10:59.178 --rc genhtml_function_coverage=1 00:10:59.178 --rc genhtml_legend=1 00:10:59.178 --rc geninfo_all_blocks=1 00:10:59.178 --rc geninfo_unexecuted_blocks=1 00:10:59.178 00:10:59.178 ' 00:10:59.178 14:43:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56892 00:10:59.178 14:43:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:59.178 14:43:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56892 00:10:59.178 14:43:29 rpc -- common/autotest_common.sh@833 -- # '[' -z 56892 ']' 00:10:59.178 14:43:29 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.178 14:43:29 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.178 14:43:29 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.178 14:43:29 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.178 14:43:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.178 14:43:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:59.436 [2024-11-04 14:43:29.174081] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:10:59.436 [2024-11-04 14:43:29.174289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56892 ] 00:10:59.695 [2024-11-04 14:43:29.362250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.695 [2024-11-04 14:43:29.519252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:59.695 [2024-11-04 14:43:29.519315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56892' to capture a snapshot of events at runtime. 00:10:59.695 [2024-11-04 14:43:29.519333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.695 [2024-11-04 14:43:29.519349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.695 [2024-11-04 14:43:29.519362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56892 for offline analysis/debug. 
00:10:59.695 [2024-11-04 14:43:29.520648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.627 14:43:30 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:00.627 14:43:30 rpc -- common/autotest_common.sh@866 -- # return 0 00:11:00.627 14:43:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:00.627 14:43:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:00.627 14:43:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:00.627 14:43:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:00.627 14:43:30 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:00.627 14:43:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.627 14:43:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.627 ************************************ 00:11:00.627 START TEST rpc_integrity 00:11:00.627 ************************************ 00:11:00.627 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:11:00.628 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.628 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.628 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:00.628 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.628 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:00.628 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:00.886 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:00.886 14:43:30 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:00.886 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.886 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:00.886 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.886 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:00.886 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:00.886 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:00.887 { 00:11:00.887 "name": "Malloc0", 00:11:00.887 "aliases": [ 00:11:00.887 "9f3dcf79-1dc7-4141-a5f7-9f13c778efa0" 00:11:00.887 ], 00:11:00.887 "product_name": "Malloc disk", 00:11:00.887 "block_size": 512, 00:11:00.887 "num_blocks": 16384, 00:11:00.887 "uuid": "9f3dcf79-1dc7-4141-a5f7-9f13c778efa0", 00:11:00.887 "assigned_rate_limits": { 00:11:00.887 "rw_ios_per_sec": 0, 00:11:00.887 "rw_mbytes_per_sec": 0, 00:11:00.887 "r_mbytes_per_sec": 0, 00:11:00.887 "w_mbytes_per_sec": 0 00:11:00.887 }, 00:11:00.887 "claimed": false, 00:11:00.887 "zoned": false, 00:11:00.887 "supported_io_types": { 00:11:00.887 "read": true, 00:11:00.887 "write": true, 00:11:00.887 "unmap": true, 00:11:00.887 "flush": true, 00:11:00.887 "reset": true, 00:11:00.887 "nvme_admin": false, 00:11:00.887 "nvme_io": false, 00:11:00.887 "nvme_io_md": false, 00:11:00.887 "write_zeroes": true, 00:11:00.887 "zcopy": true, 00:11:00.887 "get_zone_info": false, 00:11:00.887 "zone_management": false, 00:11:00.887 "zone_append": false, 00:11:00.887 "compare": false, 00:11:00.887 "compare_and_write": false, 00:11:00.887 "abort": true, 00:11:00.887 "seek_hole": false, 
00:11:00.887 "seek_data": false, 00:11:00.887 "copy": true, 00:11:00.887 "nvme_iov_md": false 00:11:00.887 }, 00:11:00.887 "memory_domains": [ 00:11:00.887 { 00:11:00.887 "dma_device_id": "system", 00:11:00.887 "dma_device_type": 1 00:11:00.887 }, 00:11:00.887 { 00:11:00.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.887 "dma_device_type": 2 00:11:00.887 } 00:11:00.887 ], 00:11:00.887 "driver_specific": {} 00:11:00.887 } 00:11:00.887 ]' 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:00.887 [2024-11-04 14:43:30.646682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:00.887 [2024-11-04 14:43:30.646763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.887 [2024-11-04 14:43:30.646814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:00.887 [2024-11-04 14:43:30.646842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.887 [2024-11-04 14:43:30.649970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.887 [2024-11-04 14:43:30.650024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:00.887 Passthru0 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:00.887 { 00:11:00.887 "name": "Malloc0", 00:11:00.887 "aliases": [ 00:11:00.887 "9f3dcf79-1dc7-4141-a5f7-9f13c778efa0" 00:11:00.887 ], 00:11:00.887 "product_name": "Malloc disk", 00:11:00.887 "block_size": 512, 00:11:00.887 "num_blocks": 16384, 00:11:00.887 "uuid": "9f3dcf79-1dc7-4141-a5f7-9f13c778efa0", 00:11:00.887 "assigned_rate_limits": { 00:11:00.887 "rw_ios_per_sec": 0, 00:11:00.887 "rw_mbytes_per_sec": 0, 00:11:00.887 "r_mbytes_per_sec": 0, 00:11:00.887 "w_mbytes_per_sec": 0 00:11:00.887 }, 00:11:00.887 "claimed": true, 00:11:00.887 "claim_type": "exclusive_write", 00:11:00.887 "zoned": false, 00:11:00.887 "supported_io_types": { 00:11:00.887 "read": true, 00:11:00.887 "write": true, 00:11:00.887 "unmap": true, 00:11:00.887 "flush": true, 00:11:00.887 "reset": true, 00:11:00.887 "nvme_admin": false, 00:11:00.887 "nvme_io": false, 00:11:00.887 "nvme_io_md": false, 00:11:00.887 "write_zeroes": true, 00:11:00.887 "zcopy": true, 00:11:00.887 "get_zone_info": false, 00:11:00.887 "zone_management": false, 00:11:00.887 "zone_append": false, 00:11:00.887 "compare": false, 00:11:00.887 "compare_and_write": false, 00:11:00.887 "abort": true, 00:11:00.887 "seek_hole": false, 00:11:00.887 "seek_data": false, 00:11:00.887 "copy": true, 00:11:00.887 "nvme_iov_md": false 00:11:00.887 }, 00:11:00.887 "memory_domains": [ 00:11:00.887 { 00:11:00.887 "dma_device_id": "system", 00:11:00.887 "dma_device_type": 1 00:11:00.887 }, 00:11:00.887 { 00:11:00.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.887 "dma_device_type": 2 00:11:00.887 } 00:11:00.887 ], 00:11:00.887 "driver_specific": {} 00:11:00.887 }, 00:11:00.887 { 00:11:00.887 "name": "Passthru0", 00:11:00.887 "aliases": [ 00:11:00.887 "139a7c7f-83c2-5884-89bb-a3ff2d3c8edc" 00:11:00.887 ], 00:11:00.887 "product_name": "passthru", 00:11:00.887 
"block_size": 512, 00:11:00.887 "num_blocks": 16384, 00:11:00.887 "uuid": "139a7c7f-83c2-5884-89bb-a3ff2d3c8edc", 00:11:00.887 "assigned_rate_limits": { 00:11:00.887 "rw_ios_per_sec": 0, 00:11:00.887 "rw_mbytes_per_sec": 0, 00:11:00.887 "r_mbytes_per_sec": 0, 00:11:00.887 "w_mbytes_per_sec": 0 00:11:00.887 }, 00:11:00.887 "claimed": false, 00:11:00.887 "zoned": false, 00:11:00.887 "supported_io_types": { 00:11:00.887 "read": true, 00:11:00.887 "write": true, 00:11:00.887 "unmap": true, 00:11:00.887 "flush": true, 00:11:00.887 "reset": true, 00:11:00.887 "nvme_admin": false, 00:11:00.887 "nvme_io": false, 00:11:00.887 "nvme_io_md": false, 00:11:00.887 "write_zeroes": true, 00:11:00.887 "zcopy": true, 00:11:00.887 "get_zone_info": false, 00:11:00.887 "zone_management": false, 00:11:00.887 "zone_append": false, 00:11:00.887 "compare": false, 00:11:00.887 "compare_and_write": false, 00:11:00.887 "abort": true, 00:11:00.887 "seek_hole": false, 00:11:00.887 "seek_data": false, 00:11:00.887 "copy": true, 00:11:00.887 "nvme_iov_md": false 00:11:00.887 }, 00:11:00.887 "memory_domains": [ 00:11:00.887 { 00:11:00.887 "dma_device_id": "system", 00:11:00.887 "dma_device_type": 1 00:11:00.887 }, 00:11:00.887 { 00:11:00.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.887 "dma_device_type": 2 00:11:00.887 } 00:11:00.887 ], 00:11:00.887 "driver_specific": { 00:11:00.887 "passthru": { 00:11:00.887 "name": "Passthru0", 00:11:00.887 "base_bdev_name": "Malloc0" 00:11:00.887 } 00:11:00.887 } 00:11:00.887 } 00:11:00.887 ]' 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:00.887 14:43:30 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.887 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.887 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:01.146 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:01.146 14:43:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:01.146 00:11:01.146 real 0m0.357s 00:11:01.146 user 0m0.216s 00:11:01.146 sys 0m0.048s 00:11:01.146 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.146 14:43:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 ************************************ 00:11:01.146 END TEST rpc_integrity 00:11:01.146 ************************************ 00:11:01.146 14:43:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:01.146 14:43:30 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:01.146 14:43:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.146 14:43:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 ************************************ 00:11:01.146 START TEST rpc_plugins 00:11:01.146 ************************************ 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:11:01.146 14:43:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 14:43:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:01.146 14:43:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 14:43:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:01.146 { 00:11:01.146 "name": "Malloc1", 00:11:01.146 "aliases": [ 00:11:01.146 "a94e685c-fc5b-464a-b7df-a621f83ddc17" 00:11:01.146 ], 00:11:01.146 "product_name": "Malloc disk", 00:11:01.146 "block_size": 4096, 00:11:01.146 "num_blocks": 256, 00:11:01.146 "uuid": "a94e685c-fc5b-464a-b7df-a621f83ddc17", 00:11:01.146 "assigned_rate_limits": { 00:11:01.146 "rw_ios_per_sec": 0, 00:11:01.146 "rw_mbytes_per_sec": 0, 00:11:01.146 "r_mbytes_per_sec": 0, 00:11:01.146 "w_mbytes_per_sec": 0 00:11:01.146 }, 00:11:01.146 "claimed": false, 00:11:01.146 "zoned": false, 00:11:01.146 "supported_io_types": { 00:11:01.146 "read": true, 00:11:01.146 "write": true, 00:11:01.146 "unmap": true, 00:11:01.146 "flush": true, 00:11:01.146 "reset": true, 00:11:01.146 "nvme_admin": false, 00:11:01.146 "nvme_io": false, 00:11:01.146 "nvme_io_md": false, 00:11:01.146 "write_zeroes": true, 00:11:01.146 "zcopy": true, 00:11:01.146 "get_zone_info": false, 00:11:01.146 "zone_management": false, 00:11:01.146 "zone_append": false, 00:11:01.146 "compare": false, 00:11:01.146 "compare_and_write": false, 00:11:01.146 "abort": true, 00:11:01.146 "seek_hole": false, 00:11:01.146 "seek_data": false, 00:11:01.146 "copy": 
true, 00:11:01.146 "nvme_iov_md": false 00:11:01.146 }, 00:11:01.146 "memory_domains": [ 00:11:01.146 { 00:11:01.146 "dma_device_id": "system", 00:11:01.146 "dma_device_type": 1 00:11:01.146 }, 00:11:01.146 { 00:11:01.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.146 "dma_device_type": 2 00:11:01.146 } 00:11:01.146 ], 00:11:01.146 "driver_specific": {} 00:11:01.146 } 00:11:01.146 ]' 00:11:01.146 14:43:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:01.146 14:43:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:01.146 14:43:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 14:43:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 14:43:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 14:43:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 14:43:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:01.146 14:43:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:01.405 ************************************ 00:11:01.405 END TEST rpc_plugins 00:11:01.405 ************************************ 00:11:01.405 14:43:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:01.405 00:11:01.405 real 0m0.174s 00:11:01.405 user 0m0.112s 00:11:01.405 sys 0m0.020s 00:11:01.405 14:43:31 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.405 14:43:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:01.405 14:43:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:01.405 14:43:31 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:01.405 14:43:31 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.405 14:43:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.405 ************************************ 00:11:01.405 START TEST rpc_trace_cmd_test 00:11:01.405 ************************************ 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:01.405 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56892", 00:11:01.405 "tpoint_group_mask": "0x8", 00:11:01.405 "iscsi_conn": { 00:11:01.405 "mask": "0x2", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "scsi": { 00:11:01.405 "mask": "0x4", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "bdev": { 00:11:01.405 "mask": "0x8", 00:11:01.405 "tpoint_mask": "0xffffffffffffffff" 00:11:01.405 }, 00:11:01.405 "nvmf_rdma": { 00:11:01.405 "mask": "0x10", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "nvmf_tcp": { 00:11:01.405 "mask": "0x20", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "ftl": { 00:11:01.405 "mask": "0x40", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "blobfs": { 00:11:01.405 "mask": "0x80", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "dsa": { 00:11:01.405 "mask": "0x200", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "thread": { 00:11:01.405 "mask": "0x400", 00:11:01.405 
"tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "nvme_pcie": { 00:11:01.405 "mask": "0x800", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "iaa": { 00:11:01.405 "mask": "0x1000", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "nvme_tcp": { 00:11:01.405 "mask": "0x2000", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "bdev_nvme": { 00:11:01.405 "mask": "0x4000", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "sock": { 00:11:01.405 "mask": "0x8000", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "blob": { 00:11:01.405 "mask": "0x10000", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "bdev_raid": { 00:11:01.405 "mask": "0x20000", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 }, 00:11:01.405 "scheduler": { 00:11:01.405 "mask": "0x40000", 00:11:01.405 "tpoint_mask": "0x0" 00:11:01.405 } 00:11:01.405 }' 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:11:01.405 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:01.406 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:01.406 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:01.664 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:01.664 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:01.664 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:01.664 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:01.664 ************************************ 00:11:01.664 END TEST rpc_trace_cmd_test 00:11:01.664 ************************************ 00:11:01.664 14:43:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:01.664 00:11:01.664 real 0m0.292s 00:11:01.664 user 
0m0.257s 00:11:01.664 sys 0m0.026s 00:11:01.664 14:43:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.664 14:43:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 14:43:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:01.664 14:43:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:01.664 14:43:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:01.664 14:43:31 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:01.664 14:43:31 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.664 14:43:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 ************************************ 00:11:01.664 START TEST rpc_daemon_integrity 00:11:01.664 ************************************ 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.664 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:01.924 { 00:11:01.924 "name": "Malloc2", 00:11:01.924 "aliases": [ 00:11:01.924 "34ccabe7-b338-4699-8ada-bc5e96c7485a" 00:11:01.924 ], 00:11:01.924 "product_name": "Malloc disk", 00:11:01.924 "block_size": 512, 00:11:01.924 "num_blocks": 16384, 00:11:01.924 "uuid": "34ccabe7-b338-4699-8ada-bc5e96c7485a", 00:11:01.924 "assigned_rate_limits": { 00:11:01.924 "rw_ios_per_sec": 0, 00:11:01.924 "rw_mbytes_per_sec": 0, 00:11:01.924 "r_mbytes_per_sec": 0, 00:11:01.924 "w_mbytes_per_sec": 0 00:11:01.924 }, 00:11:01.924 "claimed": false, 00:11:01.924 "zoned": false, 00:11:01.924 "supported_io_types": { 00:11:01.924 "read": true, 00:11:01.924 "write": true, 00:11:01.924 "unmap": true, 00:11:01.924 "flush": true, 00:11:01.924 "reset": true, 00:11:01.924 "nvme_admin": false, 00:11:01.924 "nvme_io": false, 00:11:01.924 "nvme_io_md": false, 00:11:01.924 "write_zeroes": true, 00:11:01.924 "zcopy": true, 00:11:01.924 "get_zone_info": false, 00:11:01.924 "zone_management": false, 00:11:01.924 "zone_append": false, 00:11:01.924 "compare": false, 00:11:01.924 "compare_and_write": false, 00:11:01.924 "abort": true, 00:11:01.924 "seek_hole": false, 00:11:01.924 "seek_data": false, 00:11:01.924 "copy": true, 00:11:01.924 "nvme_iov_md": false 00:11:01.924 }, 00:11:01.924 "memory_domains": [ 00:11:01.924 { 00:11:01.924 "dma_device_id": "system", 00:11:01.924 "dma_device_type": 1 00:11:01.924 }, 00:11:01.924 { 00:11:01.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.924 "dma_device_type": 2 00:11:01.924 } 
00:11:01.924 ], 00:11:01.924 "driver_specific": {} 00:11:01.924 } 00:11:01.924 ]' 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 [2024-11-04 14:43:31.608327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:01.924 [2024-11-04 14:43:31.608406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.924 [2024-11-04 14:43:31.608440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:01.924 [2024-11-04 14:43:31.608458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.924 [2024-11-04 14:43:31.611479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.924 [2024-11-04 14:43:31.611655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:01.924 Passthru0 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:01.924 { 00:11:01.924 "name": "Malloc2", 00:11:01.924 "aliases": [ 00:11:01.924 "34ccabe7-b338-4699-8ada-bc5e96c7485a" 
00:11:01.924 ], 00:11:01.924 "product_name": "Malloc disk", 00:11:01.924 "block_size": 512, 00:11:01.924 "num_blocks": 16384, 00:11:01.924 "uuid": "34ccabe7-b338-4699-8ada-bc5e96c7485a", 00:11:01.924 "assigned_rate_limits": { 00:11:01.924 "rw_ios_per_sec": 0, 00:11:01.924 "rw_mbytes_per_sec": 0, 00:11:01.924 "r_mbytes_per_sec": 0, 00:11:01.924 "w_mbytes_per_sec": 0 00:11:01.924 }, 00:11:01.924 "claimed": true, 00:11:01.924 "claim_type": "exclusive_write", 00:11:01.924 "zoned": false, 00:11:01.924 "supported_io_types": { 00:11:01.924 "read": true, 00:11:01.924 "write": true, 00:11:01.924 "unmap": true, 00:11:01.924 "flush": true, 00:11:01.924 "reset": true, 00:11:01.924 "nvme_admin": false, 00:11:01.924 "nvme_io": false, 00:11:01.924 "nvme_io_md": false, 00:11:01.924 "write_zeroes": true, 00:11:01.924 "zcopy": true, 00:11:01.924 "get_zone_info": false, 00:11:01.924 "zone_management": false, 00:11:01.924 "zone_append": false, 00:11:01.924 "compare": false, 00:11:01.924 "compare_and_write": false, 00:11:01.924 "abort": true, 00:11:01.924 "seek_hole": false, 00:11:01.924 "seek_data": false, 00:11:01.924 "copy": true, 00:11:01.924 "nvme_iov_md": false 00:11:01.924 }, 00:11:01.924 "memory_domains": [ 00:11:01.924 { 00:11:01.924 "dma_device_id": "system", 00:11:01.924 "dma_device_type": 1 00:11:01.924 }, 00:11:01.924 { 00:11:01.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.924 "dma_device_type": 2 00:11:01.924 } 00:11:01.924 ], 00:11:01.924 "driver_specific": {} 00:11:01.924 }, 00:11:01.924 { 00:11:01.924 "name": "Passthru0", 00:11:01.924 "aliases": [ 00:11:01.924 "d4a9a180-a3ff-53b2-a852-554b135e2b42" 00:11:01.924 ], 00:11:01.924 "product_name": "passthru", 00:11:01.924 "block_size": 512, 00:11:01.924 "num_blocks": 16384, 00:11:01.924 "uuid": "d4a9a180-a3ff-53b2-a852-554b135e2b42", 00:11:01.924 "assigned_rate_limits": { 00:11:01.924 "rw_ios_per_sec": 0, 00:11:01.924 "rw_mbytes_per_sec": 0, 00:11:01.924 "r_mbytes_per_sec": 0, 00:11:01.924 "w_mbytes_per_sec": 0 
00:11:01.924 }, 00:11:01.924 "claimed": false, 00:11:01.924 "zoned": false, 00:11:01.924 "supported_io_types": { 00:11:01.924 "read": true, 00:11:01.924 "write": true, 00:11:01.924 "unmap": true, 00:11:01.924 "flush": true, 00:11:01.924 "reset": true, 00:11:01.924 "nvme_admin": false, 00:11:01.924 "nvme_io": false, 00:11:01.924 "nvme_io_md": false, 00:11:01.924 "write_zeroes": true, 00:11:01.924 "zcopy": true, 00:11:01.924 "get_zone_info": false, 00:11:01.924 "zone_management": false, 00:11:01.924 "zone_append": false, 00:11:01.924 "compare": false, 00:11:01.924 "compare_and_write": false, 00:11:01.924 "abort": true, 00:11:01.924 "seek_hole": false, 00:11:01.924 "seek_data": false, 00:11:01.924 "copy": true, 00:11:01.924 "nvme_iov_md": false 00:11:01.924 }, 00:11:01.924 "memory_domains": [ 00:11:01.924 { 00:11:01.924 "dma_device_id": "system", 00:11:01.924 "dma_device_type": 1 00:11:01.924 }, 00:11:01.924 { 00:11:01.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.924 "dma_device_type": 2 00:11:01.924 } 00:11:01.924 ], 00:11:01.924 "driver_specific": { 00:11:01.924 "passthru": { 00:11:01.924 "name": "Passthru0", 00:11:01.924 "base_bdev_name": "Malloc2" 00:11:01.924 } 00:11:01.924 } 00:11:01.924 } 00:11:01.924 ]' 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:01.925 ************************************ 00:11:01.925 END TEST rpc_daemon_integrity 00:11:01.925 ************************************ 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:01.925 00:11:01.925 real 0m0.345s 00:11:01.925 user 0m0.204s 00:11:01.925 sys 0m0.043s 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.925 14:43:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 14:43:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:02.183 14:43:31 rpc -- rpc/rpc.sh@84 -- # killprocess 56892 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@952 -- # '[' -z 56892 ']' 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@956 -- # kill -0 56892 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@957 -- # uname 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56892 00:11:02.183 killing process with pid 56892 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56892' 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@971 -- # kill 56892 00:11:02.183 14:43:31 rpc -- common/autotest_common.sh@976 -- # wait 56892 00:11:04.786 00:11:04.786 real 0m5.299s 00:11:04.786 user 0m5.939s 00:11:04.786 sys 0m0.918s 00:11:04.786 14:43:34 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:04.786 ************************************ 00:11:04.786 END TEST rpc 00:11:04.786 ************************************ 00:11:04.786 14:43:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.786 14:43:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:04.786 14:43:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:04.786 14:43:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:04.786 14:43:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.786 ************************************ 00:11:04.786 START TEST skip_rpc 00:11:04.786 ************************************ 00:11:04.786 14:43:34 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:04.786 * Looking for test storage... 
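[editor's note] The rpc integrity tests above repeatedly assert on list length (`jq length` followed by `'[' 1 == 1 ']'`). A minimal stand-alone sketch of that assertion style, using pure bash and `grep` instead of a live SPDK target and `jq` (the `bdevs` value and the field counted are illustrative, not taken from a real RPC response):

```shell
#!/usr/bin/env bash
# Sketch of the "count entries, compare against expected" pattern.
# In the real rpc.sh tests the JSON comes from `rpc_cmd bdev_get_bdevs`
# and the count from `jq length`; here both are emulated locally.
bdevs='[ {"name": "Malloc2"} ]'

# grep -c counts matching lines; with one entry per line this
# approximates the element count that jq length would report.
count=$(printf '%s\n' "$bdevs" | grep -c '"name"')

if [ "$count" -eq 1 ]; then
    echo "exactly one bdev, as the test expects"
fi
```

The real helpers additionally wrap each step in `xtrace_disable`/`set +x` to keep the trace output readable, which is why the log above interleaves those calls with every assertion.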
00:11:04.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:04.786 14:43:34 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:04.786 14:43:34 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:04.786 14:43:34 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:04.786 14:43:34 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.786 14:43:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.787 14:43:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:11:04.787 14:43:34 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.787 14:43:34 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:04.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.787 --rc genhtml_branch_coverage=1 00:11:04.787 --rc genhtml_function_coverage=1 00:11:04.787 --rc genhtml_legend=1 00:11:04.787 --rc geninfo_all_blocks=1 00:11:04.787 --rc geninfo_unexecuted_blocks=1 00:11:04.787 00:11:04.787 ' 00:11:04.787 14:43:34 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:04.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.787 --rc genhtml_branch_coverage=1 00:11:04.787 --rc genhtml_function_coverage=1 00:11:04.787 --rc genhtml_legend=1 00:11:04.787 --rc geninfo_all_blocks=1 00:11:04.787 --rc geninfo_unexecuted_blocks=1 00:11:04.787 00:11:04.787 ' 00:11:04.787 14:43:34 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:11:04.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.787 --rc genhtml_branch_coverage=1 00:11:04.787 --rc genhtml_function_coverage=1 00:11:04.787 --rc genhtml_legend=1 00:11:04.787 --rc geninfo_all_blocks=1 00:11:04.787 --rc geninfo_unexecuted_blocks=1 00:11:04.787 00:11:04.787 ' 00:11:04.787 14:43:34 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:04.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.787 --rc genhtml_branch_coverage=1 00:11:04.787 --rc genhtml_function_coverage=1 00:11:04.787 --rc genhtml_legend=1 00:11:04.787 --rc geninfo_all_blocks=1 00:11:04.787 --rc geninfo_unexecuted_blocks=1 00:11:04.787 00:11:04.787 ' 00:11:04.787 14:43:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:04.787 14:43:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:04.787 14:43:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:04.787 14:43:34 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:04.787 14:43:34 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:04.787 14:43:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 ************************************ 00:11:04.787 START TEST skip_rpc 00:11:04.787 ************************************ 00:11:04.787 14:43:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:11:04.787 14:43:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57121 00:11:04.787 14:43:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:04.787 14:43:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:04.787 14:43:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:04.787 [2024-11-04 14:43:34.533654] Starting SPDK v25.01-pre 
git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:11:04.787 [2024-11-04 14:43:34.534048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57121 ] 00:11:05.046 [2024-11-04 14:43:34.714052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.046 [2024-11-04 14:43:34.861529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:10.314 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57121 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57121 ']' 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57121 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57121 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57121' 00:11:10.315 killing process with pid 57121 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57121 00:11:10.315 14:43:39 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57121 00:11:12.209 ************************************ 00:11:12.209 END TEST skip_rpc 00:11:12.209 ************************************ 00:11:12.209 00:11:12.209 real 0m7.341s 00:11:12.209 user 0m6.776s 00:11:12.209 sys 0m0.467s 00:11:12.209 14:43:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:12.209 14:43:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.209 14:43:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:12.209 14:43:41 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:12.209 14:43:41 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.209 14:43:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.209 
************************************ 00:11:12.209 START TEST skip_rpc_with_json 00:11:12.209 ************************************ 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:12.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57225 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57225 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57225 ']' 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:12.209 14:43:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:12.209 [2024-11-04 14:43:41.927650] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:11:12.209 [2024-11-04 14:43:41.928197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57225 ] 00:11:12.466 [2024-11-04 14:43:42.115296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.466 [2024-11-04 14:43:42.252029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:13.410 [2024-11-04 14:43:43.123940] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:13.410 request: 00:11:13.410 { 00:11:13.410 "trtype": "tcp", 00:11:13.410 "method": "nvmf_get_transports", 00:11:13.410 "req_id": 1 00:11:13.410 } 00:11:13.410 Got JSON-RPC error response 00:11:13.410 response: 00:11:13.410 { 00:11:13.410 "code": -19, 00:11:13.410 "message": "No such device" 00:11:13.410 } 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:13.410 [2024-11-04 14:43:43.136095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.410 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:13.691 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.691 14:43:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:13.691 { 00:11:13.691 "subsystems": [ 00:11:13.691 { 00:11:13.691 "subsystem": "fsdev", 00:11:13.691 "config": [ 00:11:13.691 { 00:11:13.691 "method": "fsdev_set_opts", 00:11:13.691 "params": { 00:11:13.691 "fsdev_io_pool_size": 65535, 00:11:13.691 "fsdev_io_cache_size": 256 00:11:13.691 } 00:11:13.691 } 00:11:13.691 ] 00:11:13.691 }, 00:11:13.691 { 00:11:13.691 "subsystem": "keyring", 00:11:13.691 "config": [] 00:11:13.691 }, 00:11:13.691 { 00:11:13.691 "subsystem": "iobuf", 00:11:13.691 "config": [ 00:11:13.691 { 00:11:13.691 "method": "iobuf_set_options", 00:11:13.691 "params": { 00:11:13.691 "small_pool_count": 8192, 00:11:13.691 "large_pool_count": 1024, 00:11:13.691 "small_bufsize": 8192, 00:11:13.691 "large_bufsize": 135168, 00:11:13.691 "enable_numa": false 00:11:13.691 } 00:11:13.691 } 00:11:13.691 ] 00:11:13.691 }, 00:11:13.691 { 00:11:13.691 "subsystem": "sock", 00:11:13.691 "config": [ 00:11:13.691 { 00:11:13.691 "method": "sock_set_default_impl", 00:11:13.691 "params": { 00:11:13.691 "impl_name": "posix" 00:11:13.691 } 00:11:13.691 }, 00:11:13.691 { 00:11:13.691 "method": "sock_impl_set_options", 00:11:13.691 "params": { 00:11:13.691 "impl_name": "ssl", 00:11:13.691 "recv_buf_size": 4096, 00:11:13.691 "send_buf_size": 4096, 00:11:13.691 "enable_recv_pipe": true, 00:11:13.691 "enable_quickack": false, 00:11:13.691 
"enable_placement_id": 0, 00:11:13.691 "enable_zerocopy_send_server": true, 00:11:13.691 "enable_zerocopy_send_client": false, 00:11:13.691 "zerocopy_threshold": 0, 00:11:13.691 "tls_version": 0, 00:11:13.691 "enable_ktls": false 00:11:13.691 } 00:11:13.691 }, 00:11:13.691 { 00:11:13.691 "method": "sock_impl_set_options", 00:11:13.691 "params": { 00:11:13.691 "impl_name": "posix", 00:11:13.691 "recv_buf_size": 2097152, 00:11:13.691 "send_buf_size": 2097152, 00:11:13.691 "enable_recv_pipe": true, 00:11:13.691 "enable_quickack": false, 00:11:13.691 "enable_placement_id": 0, 00:11:13.691 "enable_zerocopy_send_server": true, 00:11:13.691 "enable_zerocopy_send_client": false, 00:11:13.691 "zerocopy_threshold": 0, 00:11:13.691 "tls_version": 0, 00:11:13.692 "enable_ktls": false 00:11:13.692 } 00:11:13.692 } 00:11:13.692 ] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "vmd", 00:11:13.692 "config": [] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "accel", 00:11:13.692 "config": [ 00:11:13.692 { 00:11:13.692 "method": "accel_set_options", 00:11:13.692 "params": { 00:11:13.692 "small_cache_size": 128, 00:11:13.692 "large_cache_size": 16, 00:11:13.692 "task_count": 2048, 00:11:13.692 "sequence_count": 2048, 00:11:13.692 "buf_count": 2048 00:11:13.692 } 00:11:13.692 } 00:11:13.692 ] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "bdev", 00:11:13.692 "config": [ 00:11:13.692 { 00:11:13.692 "method": "bdev_set_options", 00:11:13.692 "params": { 00:11:13.692 "bdev_io_pool_size": 65535, 00:11:13.692 "bdev_io_cache_size": 256, 00:11:13.692 "bdev_auto_examine": true, 00:11:13.692 "iobuf_small_cache_size": 128, 00:11:13.692 "iobuf_large_cache_size": 16 00:11:13.692 } 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "method": "bdev_raid_set_options", 00:11:13.692 "params": { 00:11:13.692 "process_window_size_kb": 1024, 00:11:13.692 "process_max_bandwidth_mb_sec": 0 00:11:13.692 } 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "method": "bdev_iscsi_set_options", 
00:11:13.692 "params": { 00:11:13.692 "timeout_sec": 30 00:11:13.692 } 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "method": "bdev_nvme_set_options", 00:11:13.692 "params": { 00:11:13.692 "action_on_timeout": "none", 00:11:13.692 "timeout_us": 0, 00:11:13.692 "timeout_admin_us": 0, 00:11:13.692 "keep_alive_timeout_ms": 10000, 00:11:13.692 "arbitration_burst": 0, 00:11:13.692 "low_priority_weight": 0, 00:11:13.692 "medium_priority_weight": 0, 00:11:13.692 "high_priority_weight": 0, 00:11:13.692 "nvme_adminq_poll_period_us": 10000, 00:11:13.692 "nvme_ioq_poll_period_us": 0, 00:11:13.692 "io_queue_requests": 0, 00:11:13.692 "delay_cmd_submit": true, 00:11:13.692 "transport_retry_count": 4, 00:11:13.692 "bdev_retry_count": 3, 00:11:13.692 "transport_ack_timeout": 0, 00:11:13.692 "ctrlr_loss_timeout_sec": 0, 00:11:13.692 "reconnect_delay_sec": 0, 00:11:13.692 "fast_io_fail_timeout_sec": 0, 00:11:13.692 "disable_auto_failback": false, 00:11:13.692 "generate_uuids": false, 00:11:13.692 "transport_tos": 0, 00:11:13.692 "nvme_error_stat": false, 00:11:13.692 "rdma_srq_size": 0, 00:11:13.692 "io_path_stat": false, 00:11:13.692 "allow_accel_sequence": false, 00:11:13.692 "rdma_max_cq_size": 0, 00:11:13.692 "rdma_cm_event_timeout_ms": 0, 00:11:13.692 "dhchap_digests": [ 00:11:13.692 "sha256", 00:11:13.692 "sha384", 00:11:13.692 "sha512" 00:11:13.692 ], 00:11:13.692 "dhchap_dhgroups": [ 00:11:13.692 "null", 00:11:13.692 "ffdhe2048", 00:11:13.692 "ffdhe3072", 00:11:13.692 "ffdhe4096", 00:11:13.692 "ffdhe6144", 00:11:13.692 "ffdhe8192" 00:11:13.692 ] 00:11:13.692 } 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "method": "bdev_nvme_set_hotplug", 00:11:13.692 "params": { 00:11:13.692 "period_us": 100000, 00:11:13.692 "enable": false 00:11:13.692 } 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "method": "bdev_wait_for_examine" 00:11:13.692 } 00:11:13.692 ] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "scsi", 00:11:13.692 "config": null 00:11:13.692 }, 00:11:13.692 { 
00:11:13.692 "subsystem": "scheduler", 00:11:13.692 "config": [ 00:11:13.692 { 00:11:13.692 "method": "framework_set_scheduler", 00:11:13.692 "params": { 00:11:13.692 "name": "static" 00:11:13.692 } 00:11:13.692 } 00:11:13.692 ] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "vhost_scsi", 00:11:13.692 "config": [] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "vhost_blk", 00:11:13.692 "config": [] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "ublk", 00:11:13.692 "config": [] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "nbd", 00:11:13.692 "config": [] 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "subsystem": "nvmf", 00:11:13.692 "config": [ 00:11:13.692 { 00:11:13.692 "method": "nvmf_set_config", 00:11:13.692 "params": { 00:11:13.692 "discovery_filter": "match_any", 00:11:13.692 "admin_cmd_passthru": { 00:11:13.692 "identify_ctrlr": false 00:11:13.692 }, 00:11:13.692 "dhchap_digests": [ 00:11:13.692 "sha256", 00:11:13.692 "sha384", 00:11:13.692 "sha512" 00:11:13.692 ], 00:11:13.692 "dhchap_dhgroups": [ 00:11:13.692 "null", 00:11:13.692 "ffdhe2048", 00:11:13.692 "ffdhe3072", 00:11:13.692 "ffdhe4096", 00:11:13.692 "ffdhe6144", 00:11:13.692 "ffdhe8192" 00:11:13.692 ] 00:11:13.692 } 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "method": "nvmf_set_max_subsystems", 00:11:13.692 "params": { 00:11:13.692 "max_subsystems": 1024 00:11:13.692 } 00:11:13.692 }, 00:11:13.692 { 00:11:13.692 "method": "nvmf_set_crdt", 00:11:13.692 "params": { 00:11:13.692 "crdt1": 0, 00:11:13.692 "crdt2": 0, 00:11:13.692 "crdt3": 0 00:11:13.693 } 00:11:13.693 }, 00:11:13.693 { 00:11:13.693 "method": "nvmf_create_transport", 00:11:13.693 "params": { 00:11:13.693 "trtype": "TCP", 00:11:13.693 "max_queue_depth": 128, 00:11:13.693 "max_io_qpairs_per_ctrlr": 127, 00:11:13.693 "in_capsule_data_size": 4096, 00:11:13.693 "max_io_size": 131072, 00:11:13.693 "io_unit_size": 131072, 00:11:13.693 "max_aq_depth": 128, 00:11:13.693 "num_shared_buffers": 511, 
00:11:13.693 "buf_cache_size": 4294967295, 00:11:13.693 "dif_insert_or_strip": false, 00:11:13.693 "zcopy": false, 00:11:13.693 "c2h_success": true, 00:11:13.693 "sock_priority": 0, 00:11:13.693 "abort_timeout_sec": 1, 00:11:13.693 "ack_timeout": 0, 00:11:13.693 "data_wr_pool_size": 0 00:11:13.693 } 00:11:13.693 } 00:11:13.693 ] 00:11:13.693 }, 00:11:13.693 { 00:11:13.693 "subsystem": "iscsi", 00:11:13.693 "config": [ 00:11:13.693 { 00:11:13.693 "method": "iscsi_set_options", 00:11:13.693 "params": { 00:11:13.693 "node_base": "iqn.2016-06.io.spdk", 00:11:13.693 "max_sessions": 128, 00:11:13.693 "max_connections_per_session": 2, 00:11:13.693 "max_queue_depth": 64, 00:11:13.693 "default_time2wait": 2, 00:11:13.693 "default_time2retain": 20, 00:11:13.693 "first_burst_length": 8192, 00:11:13.693 "immediate_data": true, 00:11:13.693 "allow_duplicated_isid": false, 00:11:13.693 "error_recovery_level": 0, 00:11:13.693 "nop_timeout": 60, 00:11:13.693 "nop_in_interval": 30, 00:11:13.693 "disable_chap": false, 00:11:13.693 "require_chap": false, 00:11:13.693 "mutual_chap": false, 00:11:13.693 "chap_group": 0, 00:11:13.693 "max_large_datain_per_connection": 64, 00:11:13.693 "max_r2t_per_connection": 4, 00:11:13.693 "pdu_pool_size": 36864, 00:11:13.693 "immediate_data_pool_size": 16384, 00:11:13.693 "data_out_pool_size": 2048 00:11:13.693 } 00:11:13.693 } 00:11:13.693 ] 00:11:13.693 } 00:11:13.693 ] 00:11:13.693 } 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57225 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57225 ']' 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57225 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57225 00:11:13.693 killing process with pid 57225 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57225' 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57225 00:11:13.693 14:43:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57225 00:11:16.231 14:43:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57281 00:11:16.231 14:43:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:16.231 14:43:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57281 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57281 ']' 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57281 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57281 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:11:21.561 killing process with pid 57281 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57281' 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57281 00:11:21.561 14:43:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57281 00:11:23.474 14:43:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:23.474 14:43:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:23.474 00:11:23.474 real 0m11.190s 00:11:23.474 user 0m10.598s 00:11:23.474 sys 0m1.023s 00:11:23.474 14:43:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.474 14:43:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:23.474 ************************************ 00:11:23.474 END TEST skip_rpc_with_json 00:11:23.474 ************************************ 00:11:23.474 14:43:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:23.474 14:43:53 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:23.474 14:43:53 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.474 14:43:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.474 ************************************ 00:11:23.474 START TEST skip_rpc_with_delay 00:11:23.474 ************************************ 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:11:23.474 14:43:53 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:23.474 [2024-11-04 14:43:53.211082] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:23.474 00:11:23.474 real 0m0.243s 00:11:23.474 user 0m0.128s 00:11:23.474 sys 0m0.111s 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.474 14:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:23.474 ************************************ 00:11:23.474 END TEST skip_rpc_with_delay 00:11:23.474 ************************************ 00:11:23.474 14:43:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:23.474 14:43:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:23.474 14:43:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:23.474 14:43:53 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:23.474 14:43:53 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.474 14:43:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.474 ************************************ 00:11:23.474 START TEST exit_on_failed_rpc_init 00:11:23.474 ************************************ 00:11:23.474 14:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:11:23.474 14:43:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57410 00:11:23.474 14:43:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:23.474 14:43:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57410 00:11:23.474 14:43:53 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57410 ']' 00:11:23.474 14:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.474 14:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.474 14:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.475 14:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.475 14:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:23.732 [2024-11-04 14:43:53.447457] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:11:23.733 [2024-11-04 14:43:53.447663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57410 ] 00:11:23.733 [2024-11-04 14:43:53.621439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.991 [2024-11-04 14:43:53.750652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:24.988 14:43:54 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:24.988 14:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:24.988 [2024-11-04 14:43:54.749611] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:11:24.988 [2024-11-04 14:43:54.749822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57438 ] 00:11:25.246 [2024-11-04 14:43:54.944049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.246 [2024-11-04 14:43:55.114939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.246 [2024-11-04 14:43:55.115101] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:25.246 [2024-11-04 14:43:55.115131] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:25.246 [2024-11-04 14:43:55.115160] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57410 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57410 ']' 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57410 00:11:25.812 14:43:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57410 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:25.812 killing process with pid 57410 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57410' 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57410 00:11:25.812 14:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57410 00:11:28.343 00:11:28.343 real 0m4.391s 00:11:28.343 user 0m5.026s 00:11:28.343 sys 0m0.695s 00:11:28.343 14:43:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.343 14:43:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:28.343 ************************************ 00:11:28.343 END TEST exit_on_failed_rpc_init 00:11:28.343 ************************************ 00:11:28.343 14:43:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:28.343 00:11:28.343 real 0m23.578s 00:11:28.343 user 0m22.706s 00:11:28.343 sys 0m2.519s 00:11:28.343 14:43:57 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.343 14:43:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.343 ************************************ 00:11:28.343 END TEST skip_rpc 00:11:28.343 ************************************ 00:11:28.343 14:43:57 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:28.343 14:43:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:28.343 14:43:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.343 14:43:57 -- common/autotest_common.sh@10 -- # set +x 00:11:28.343 ************************************ 00:11:28.343 START TEST rpc_client 00:11:28.343 ************************************ 00:11:28.343 14:43:57 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:28.343 * Looking for test storage... 00:11:28.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:28.343 14:43:57 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.343 14:43:57 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.343 14:43:57 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.343 14:43:57 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@345 
-- # : 1 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.343 14:43:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.343 14:43:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:11:28.343 14:43:58 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.343 14:43:58 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.343 --rc genhtml_branch_coverage=1 00:11:28.343 --rc genhtml_function_coverage=1 00:11:28.343 --rc genhtml_legend=1 00:11:28.343 --rc geninfo_all_blocks=1 00:11:28.343 --rc geninfo_unexecuted_blocks=1 00:11:28.343 00:11:28.343 ' 00:11:28.343 14:43:58 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.343 --rc genhtml_branch_coverage=1 00:11:28.343 --rc genhtml_function_coverage=1 00:11:28.343 --rc 
genhtml_legend=1 00:11:28.343 --rc geninfo_all_blocks=1 00:11:28.343 --rc geninfo_unexecuted_blocks=1 00:11:28.343 00:11:28.343 ' 00:11:28.343 14:43:58 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.343 --rc genhtml_branch_coverage=1 00:11:28.343 --rc genhtml_function_coverage=1 00:11:28.343 --rc genhtml_legend=1 00:11:28.343 --rc geninfo_all_blocks=1 00:11:28.343 --rc geninfo_unexecuted_blocks=1 00:11:28.343 00:11:28.343 ' 00:11:28.343 14:43:58 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.343 --rc genhtml_branch_coverage=1 00:11:28.343 --rc genhtml_function_coverage=1 00:11:28.343 --rc genhtml_legend=1 00:11:28.343 --rc geninfo_all_blocks=1 00:11:28.343 --rc geninfo_unexecuted_blocks=1 00:11:28.343 00:11:28.343 ' 00:11:28.343 14:43:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:28.343 OK 00:11:28.343 14:43:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:28.343 00:11:28.343 real 0m0.240s 00:11:28.343 user 0m0.145s 00:11:28.343 sys 0m0.108s 00:11:28.343 14:43:58 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.343 14:43:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:28.343 ************************************ 00:11:28.343 END TEST rpc_client 00:11:28.343 ************************************ 00:11:28.343 14:43:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:28.343 14:43:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:28.343 14:43:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.343 14:43:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.343 ************************************ 00:11:28.343 START TEST json_config 
00:11:28.343 ************************************ 00:11:28.343 14:43:58 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:28.343 14:43:58 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.343 14:43:58 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.343 14:43:58 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.603 14:43:58 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.603 14:43:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.603 14:43:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.603 14:43:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.603 14:43:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.603 14:43:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.603 14:43:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.603 14:43:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.603 14:43:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.603 14:43:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.603 14:43:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.603 14:43:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.603 14:43:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:11:28.603 14:43:58 json_config -- scripts/common.sh@345 -- # : 1 00:11:28.603 14:43:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.603 14:43:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.603 14:43:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:11:28.603 14:43:58 json_config -- scripts/common.sh@353 -- # local d=1 00:11:28.603 14:43:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.603 14:43:58 json_config -- scripts/common.sh@355 -- # echo 1 00:11:28.603 14:43:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.603 14:43:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:11:28.603 14:43:58 json_config -- scripts/common.sh@353 -- # local d=2 00:11:28.603 14:43:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.603 14:43:58 json_config -- scripts/common.sh@355 -- # echo 2 00:11:28.603 14:43:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.603 14:43:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.603 14:43:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.603 14:43:58 json_config -- scripts/common.sh@368 -- # return 0 00:11:28.603 14:43:58 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.603 14:43:58 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.603 --rc genhtml_branch_coverage=1 00:11:28.603 --rc genhtml_function_coverage=1 00:11:28.603 --rc genhtml_legend=1 00:11:28.603 --rc geninfo_all_blocks=1 00:11:28.603 --rc geninfo_unexecuted_blocks=1 00:11:28.603 00:11:28.603 ' 00:11:28.603 14:43:58 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.603 --rc genhtml_branch_coverage=1 00:11:28.603 --rc genhtml_function_coverage=1 00:11:28.603 --rc genhtml_legend=1 00:11:28.603 --rc geninfo_all_blocks=1 00:11:28.603 --rc geninfo_unexecuted_blocks=1 00:11:28.603 00:11:28.603 ' 00:11:28.603 14:43:58 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.603 --rc genhtml_branch_coverage=1 00:11:28.603 --rc genhtml_function_coverage=1 00:11:28.603 --rc genhtml_legend=1 00:11:28.603 --rc geninfo_all_blocks=1 00:11:28.603 --rc geninfo_unexecuted_blocks=1 00:11:28.603 00:11:28.603 ' 00:11:28.603 14:43:58 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.603 --rc genhtml_branch_coverage=1 00:11:28.603 --rc genhtml_function_coverage=1 00:11:28.603 --rc genhtml_legend=1 00:11:28.603 --rc geninfo_all_blocks=1 00:11:28.603 --rc geninfo_unexecuted_blocks=1 00:11:28.603 00:11:28.604 ' 00:11:28.604 14:43:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9048918-6b2b-48d9-9a25-8aa126fad89b 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=c9048918-6b2b-48d9-9a25-8aa126fad89b 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.604 14:43:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.604 14:43:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.604 14:43:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.604 14:43:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.604 14:43:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.604 14:43:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.604 14:43:58 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.604 14:43:58 json_config -- paths/export.sh@5 -- # export PATH 00:11:28.604 14:43:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@51 -- # : 0 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.604 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.604 14:43:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.604 14:43:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:11:28.604 14:43:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:28.604 14:43:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:28.604 14:43:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:28.604 14:43:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:28.604 WARNING: No tests are enabled so not running JSON configuration tests 00:11:28.604 14:43:58 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:11:28.604 14:43:58 json_config -- json_config/json_config.sh@28 -- # exit 0 00:11:28.604 00:11:28.604 real 0m0.164s 00:11:28.604 user 0m0.098s 00:11:28.604 sys 0m0.070s 00:11:28.604 14:43:58 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.604 14:43:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:28.604 ************************************ 00:11:28.604 END TEST json_config 00:11:28.604 ************************************ 00:11:28.604 14:43:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:28.604 14:43:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:28.604 14:43:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.604 14:43:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.604 ************************************ 00:11:28.604 START TEST json_config_extra_key 00:11:28.604 ************************************ 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.604 14:43:58 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.604 14:43:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.604 --rc genhtml_branch_coverage=1 00:11:28.604 --rc genhtml_function_coverage=1 00:11:28.604 --rc genhtml_legend=1 00:11:28.604 --rc geninfo_all_blocks=1 00:11:28.604 --rc geninfo_unexecuted_blocks=1 00:11:28.604 00:11:28.604 ' 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.604 --rc genhtml_branch_coverage=1 00:11:28.604 --rc genhtml_function_coverage=1 00:11:28.604 --rc 
genhtml_legend=1 00:11:28.604 --rc geninfo_all_blocks=1 00:11:28.604 --rc geninfo_unexecuted_blocks=1 00:11:28.604 00:11:28.604 ' 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.604 --rc genhtml_branch_coverage=1 00:11:28.604 --rc genhtml_function_coverage=1 00:11:28.604 --rc genhtml_legend=1 00:11:28.604 --rc geninfo_all_blocks=1 00:11:28.604 --rc geninfo_unexecuted_blocks=1 00:11:28.604 00:11:28.604 ' 00:11:28.604 14:43:58 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.604 --rc genhtml_branch_coverage=1 00:11:28.604 --rc genhtml_function_coverage=1 00:11:28.604 --rc genhtml_legend=1 00:11:28.604 --rc geninfo_all_blocks=1 00:11:28.604 --rc geninfo_unexecuted_blocks=1 00:11:28.604 00:11:28.604 ' 00:11:28.604 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9048918-6b2b-48d9-9a25-8aa126fad89b 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c9048918-6b2b-48d9-9a25-8aa126fad89b 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.605 14:43:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.605 14:43:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.605 14:43:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.605 14:43:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.605 14:43:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.605 14:43:58 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.605 14:43:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.605 14:43:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:28.605 14:43:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.605 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.605 14:43:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:28.605 INFO: launching applications... 00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:11:28.605 14:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57637 00:11:28.605 Waiting for target to run... 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57637 /var/tmp/spdk_tgt.sock 00:11:28.605 14:43:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:28.605 14:43:58 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57637 ']' 00:11:28.605 14:43:58 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:28.605 14:43:58 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:28.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:11:28.605 14:43:58 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:28.605 14:43:58 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:28.605 14:43:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:28.864 [2024-11-04 14:43:58.606578] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:11:28.864 [2024-11-04 14:43:58.606834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57637 ] 00:11:29.430 [2024-11-04 14:43:59.070434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.430 [2024-11-04 14:43:59.190538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.996 14:43:59 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:29.996 14:43:59 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:11:29.996 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:29.996 INFO: shutting down applications... 00:11:29.996 14:43:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:11:29.996 14:43:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57637 ]] 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57637 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57637 00:11:29.996 14:43:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:30.571 14:44:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:30.571 14:44:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:30.571 14:44:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57637 00:11:30.571 14:44:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:31.160 14:44:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:31.160 14:44:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:31.160 14:44:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57637 00:11:31.160 14:44:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:31.725 14:44:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:31.725 14:44:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:31.725 14:44:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57637 00:11:31.725 14:44:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:32.291 14:44:01 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:11:32.291 14:44:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:32.291 14:44:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57637 00:11:32.291 14:44:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:32.549 14:44:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:32.549 14:44:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:32.549 14:44:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57637 00:11:32.549 14:44:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:33.119 14:44:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:33.119 14:44:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:33.119 14:44:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57637 00:11:33.120 14:44:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:33.120 14:44:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:33.120 14:44:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:33.120 SPDK target shutdown done 00:11:33.120 14:44:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:33.120 Success 00:11:33.120 14:44:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:33.120 00:11:33.120 real 0m4.580s 00:11:33.120 user 0m4.013s 00:11:33.120 sys 0m0.637s 00:11:33.120 14:44:02 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.120 ************************************ 00:11:33.120 END TEST json_config_extra_key 00:11:33.120 ************************************ 00:11:33.120 14:44:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:33.120 14:44:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:33.120 14:44:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:33.120 14:44:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.120 14:44:02 -- common/autotest_common.sh@10 -- # set +x 00:11:33.120 ************************************ 00:11:33.120 START TEST alias_rpc 00:11:33.120 ************************************ 00:11:33.120 14:44:02 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:33.390 * Looking for test storage... 00:11:33.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:33.390 14:44:03 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.390 14:44:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.390 --rc genhtml_branch_coverage=1 00:11:33.390 --rc genhtml_function_coverage=1 00:11:33.390 --rc genhtml_legend=1 00:11:33.390 --rc geninfo_all_blocks=1 00:11:33.390 --rc geninfo_unexecuted_blocks=1 00:11:33.390 00:11:33.390 ' 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.390 --rc genhtml_branch_coverage=1 00:11:33.390 --rc genhtml_function_coverage=1 00:11:33.390 --rc 
genhtml_legend=1 00:11:33.390 --rc geninfo_all_blocks=1 00:11:33.390 --rc geninfo_unexecuted_blocks=1 00:11:33.390 00:11:33.390 ' 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.390 --rc genhtml_branch_coverage=1 00:11:33.390 --rc genhtml_function_coverage=1 00:11:33.390 --rc genhtml_legend=1 00:11:33.390 --rc geninfo_all_blocks=1 00:11:33.390 --rc geninfo_unexecuted_blocks=1 00:11:33.390 00:11:33.390 ' 00:11:33.390 14:44:03 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.390 --rc genhtml_branch_coverage=1 00:11:33.390 --rc genhtml_function_coverage=1 00:11:33.391 --rc genhtml_legend=1 00:11:33.391 --rc geninfo_all_blocks=1 00:11:33.391 --rc geninfo_unexecuted_blocks=1 00:11:33.391 00:11:33.391 ' 00:11:33.391 14:44:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:33.391 14:44:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57754 00:11:33.391 14:44:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:33.391 14:44:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57754 00:11:33.391 14:44:03 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57754 ']' 00:11:33.391 14:44:03 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.391 14:44:03 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:33.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.391 14:44:03 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:33.391 14:44:03 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:33.391 14:44:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.391 [2024-11-04 14:44:03.264550] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:11:33.391 [2024-11-04 14:44:03.265275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57754 ] 00:11:33.649 [2024-11-04 14:44:03.455596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.908 [2024-11-04 14:44:03.613095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.841 14:44:04 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:34.841 14:44:04 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:34.841 14:44:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:35.099 14:44:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57754 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57754 ']' 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57754 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57754 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:35.099 killing process with pid 57754 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57754' 00:11:35.099 14:44:04 alias_rpc -- 
common/autotest_common.sh@971 -- # kill 57754 00:11:35.099 14:44:04 alias_rpc -- common/autotest_common.sh@976 -- # wait 57754 00:11:37.628 00:11:37.628 real 0m4.183s 00:11:37.628 user 0m4.388s 00:11:37.628 sys 0m0.640s 00:11:37.628 14:44:07 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.628 ************************************ 00:11:37.628 14:44:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.628 END TEST alias_rpc 00:11:37.628 ************************************ 00:11:37.628 14:44:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:37.628 14:44:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:37.628 14:44:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:37.628 14:44:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.628 14:44:07 -- common/autotest_common.sh@10 -- # set +x 00:11:37.628 ************************************ 00:11:37.628 START TEST spdkcli_tcp 00:11:37.628 ************************************ 00:11:37.628 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:37.628 * Looking for test storage... 
00:11:37.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:37.628 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:37.628 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:37.628 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:37.628 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:37.628 14:44:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.628 14:44:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.628 14:44:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.628 14:44:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.628 14:44:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.628 14:44:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.628 14:44:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.629 14:44:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:37.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.629 --rc genhtml_branch_coverage=1 00:11:37.629 --rc genhtml_function_coverage=1 00:11:37.629 --rc genhtml_legend=1 00:11:37.629 --rc geninfo_all_blocks=1 00:11:37.629 --rc geninfo_unexecuted_blocks=1 00:11:37.629 00:11:37.629 ' 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:37.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.629 --rc genhtml_branch_coverage=1 00:11:37.629 --rc genhtml_function_coverage=1 00:11:37.629 --rc genhtml_legend=1 00:11:37.629 --rc geninfo_all_blocks=1 00:11:37.629 --rc geninfo_unexecuted_blocks=1 00:11:37.629 00:11:37.629 ' 00:11:37.629 14:44:07 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:37.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.629 --rc genhtml_branch_coverage=1 00:11:37.629 --rc genhtml_function_coverage=1 00:11:37.629 --rc genhtml_legend=1 00:11:37.629 --rc geninfo_all_blocks=1 00:11:37.629 --rc geninfo_unexecuted_blocks=1 00:11:37.629 00:11:37.629 ' 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:37.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.629 --rc genhtml_branch_coverage=1 00:11:37.629 --rc genhtml_function_coverage=1 00:11:37.629 --rc genhtml_legend=1 00:11:37.629 --rc geninfo_all_blocks=1 00:11:37.629 --rc geninfo_unexecuted_blocks=1 00:11:37.629 00:11:37.629 ' 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57861 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:37.629 14:44:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57861 00:11:37.629 14:44:07 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 57861 ']' 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:37.629 14:44:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.629 [2024-11-04 14:44:07.505136] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:11:37.629 [2024-11-04 14:44:07.505587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57861 ] 00:11:37.887 [2024-11-04 14:44:07.689426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.145 [2024-11-04 14:44:07.829597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.145 [2024-11-04 14:44:07.829615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.087 14:44:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:39.087 14:44:08 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:11:39.087 14:44:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57878 00:11:39.087 14:44:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:39.087 14:44:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:39.345 [ 00:11:39.345 "bdev_malloc_delete", 
00:11:39.345 "bdev_malloc_create", 00:11:39.345 "bdev_null_resize", 00:11:39.345 "bdev_null_delete", 00:11:39.345 "bdev_null_create", 00:11:39.345 "bdev_nvme_cuse_unregister", 00:11:39.345 "bdev_nvme_cuse_register", 00:11:39.345 "bdev_opal_new_user", 00:11:39.345 "bdev_opal_set_lock_state", 00:11:39.345 "bdev_opal_delete", 00:11:39.345 "bdev_opal_get_info", 00:11:39.345 "bdev_opal_create", 00:11:39.345 "bdev_nvme_opal_revert", 00:11:39.345 "bdev_nvme_opal_init", 00:11:39.345 "bdev_nvme_send_cmd", 00:11:39.345 "bdev_nvme_set_keys", 00:11:39.346 "bdev_nvme_get_path_iostat", 00:11:39.346 "bdev_nvme_get_mdns_discovery_info", 00:11:39.346 "bdev_nvme_stop_mdns_discovery", 00:11:39.346 "bdev_nvme_start_mdns_discovery", 00:11:39.346 "bdev_nvme_set_multipath_policy", 00:11:39.346 "bdev_nvme_set_preferred_path", 00:11:39.346 "bdev_nvme_get_io_paths", 00:11:39.346 "bdev_nvme_remove_error_injection", 00:11:39.346 "bdev_nvme_add_error_injection", 00:11:39.346 "bdev_nvme_get_discovery_info", 00:11:39.346 "bdev_nvme_stop_discovery", 00:11:39.346 "bdev_nvme_start_discovery", 00:11:39.346 "bdev_nvme_get_controller_health_info", 00:11:39.346 "bdev_nvme_disable_controller", 00:11:39.346 "bdev_nvme_enable_controller", 00:11:39.346 "bdev_nvme_reset_controller", 00:11:39.346 "bdev_nvme_get_transport_statistics", 00:11:39.346 "bdev_nvme_apply_firmware", 00:11:39.346 "bdev_nvme_detach_controller", 00:11:39.346 "bdev_nvme_get_controllers", 00:11:39.346 "bdev_nvme_attach_controller", 00:11:39.346 "bdev_nvme_set_hotplug", 00:11:39.346 "bdev_nvme_set_options", 00:11:39.346 "bdev_passthru_delete", 00:11:39.346 "bdev_passthru_create", 00:11:39.346 "bdev_lvol_set_parent_bdev", 00:11:39.346 "bdev_lvol_set_parent", 00:11:39.346 "bdev_lvol_check_shallow_copy", 00:11:39.346 "bdev_lvol_start_shallow_copy", 00:11:39.346 "bdev_lvol_grow_lvstore", 00:11:39.346 "bdev_lvol_get_lvols", 00:11:39.346 "bdev_lvol_get_lvstores", 00:11:39.346 "bdev_lvol_delete", 00:11:39.346 "bdev_lvol_set_read_only", 
00:11:39.346 "bdev_lvol_resize", 00:11:39.346 "bdev_lvol_decouple_parent", 00:11:39.346 "bdev_lvol_inflate", 00:11:39.346 "bdev_lvol_rename", 00:11:39.346 "bdev_lvol_clone_bdev", 00:11:39.346 "bdev_lvol_clone", 00:11:39.346 "bdev_lvol_snapshot", 00:11:39.346 "bdev_lvol_create", 00:11:39.346 "bdev_lvol_delete_lvstore", 00:11:39.346 "bdev_lvol_rename_lvstore", 00:11:39.346 "bdev_lvol_create_lvstore", 00:11:39.346 "bdev_raid_set_options", 00:11:39.346 "bdev_raid_remove_base_bdev", 00:11:39.346 "bdev_raid_add_base_bdev", 00:11:39.346 "bdev_raid_delete", 00:11:39.346 "bdev_raid_create", 00:11:39.346 "bdev_raid_get_bdevs", 00:11:39.346 "bdev_error_inject_error", 00:11:39.346 "bdev_error_delete", 00:11:39.346 "bdev_error_create", 00:11:39.346 "bdev_split_delete", 00:11:39.346 "bdev_split_create", 00:11:39.346 "bdev_delay_delete", 00:11:39.346 "bdev_delay_create", 00:11:39.346 "bdev_delay_update_latency", 00:11:39.346 "bdev_zone_block_delete", 00:11:39.346 "bdev_zone_block_create", 00:11:39.346 "blobfs_create", 00:11:39.346 "blobfs_detect", 00:11:39.346 "blobfs_set_cache_size", 00:11:39.346 "bdev_aio_delete", 00:11:39.346 "bdev_aio_rescan", 00:11:39.346 "bdev_aio_create", 00:11:39.346 "bdev_ftl_set_property", 00:11:39.346 "bdev_ftl_get_properties", 00:11:39.346 "bdev_ftl_get_stats", 00:11:39.346 "bdev_ftl_unmap", 00:11:39.346 "bdev_ftl_unload", 00:11:39.346 "bdev_ftl_delete", 00:11:39.346 "bdev_ftl_load", 00:11:39.346 "bdev_ftl_create", 00:11:39.346 "bdev_virtio_attach_controller", 00:11:39.346 "bdev_virtio_scsi_get_devices", 00:11:39.346 "bdev_virtio_detach_controller", 00:11:39.346 "bdev_virtio_blk_set_hotplug", 00:11:39.346 "bdev_iscsi_delete", 00:11:39.346 "bdev_iscsi_create", 00:11:39.346 "bdev_iscsi_set_options", 00:11:39.346 "accel_error_inject_error", 00:11:39.346 "ioat_scan_accel_module", 00:11:39.346 "dsa_scan_accel_module", 00:11:39.346 "iaa_scan_accel_module", 00:11:39.346 "keyring_file_remove_key", 00:11:39.346 "keyring_file_add_key", 00:11:39.346 
"keyring_linux_set_options", 00:11:39.346 "fsdev_aio_delete", 00:11:39.346 "fsdev_aio_create", 00:11:39.346 "iscsi_get_histogram", 00:11:39.346 "iscsi_enable_histogram", 00:11:39.346 "iscsi_set_options", 00:11:39.346 "iscsi_get_auth_groups", 00:11:39.346 "iscsi_auth_group_remove_secret", 00:11:39.346 "iscsi_auth_group_add_secret", 00:11:39.346 "iscsi_delete_auth_group", 00:11:39.346 "iscsi_create_auth_group", 00:11:39.346 "iscsi_set_discovery_auth", 00:11:39.346 "iscsi_get_options", 00:11:39.346 "iscsi_target_node_request_logout", 00:11:39.346 "iscsi_target_node_set_redirect", 00:11:39.346 "iscsi_target_node_set_auth", 00:11:39.346 "iscsi_target_node_add_lun", 00:11:39.346 "iscsi_get_stats", 00:11:39.346 "iscsi_get_connections", 00:11:39.346 "iscsi_portal_group_set_auth", 00:11:39.346 "iscsi_start_portal_group", 00:11:39.346 "iscsi_delete_portal_group", 00:11:39.346 "iscsi_create_portal_group", 00:11:39.346 "iscsi_get_portal_groups", 00:11:39.346 "iscsi_delete_target_node", 00:11:39.346 "iscsi_target_node_remove_pg_ig_maps", 00:11:39.346 "iscsi_target_node_add_pg_ig_maps", 00:11:39.346 "iscsi_create_target_node", 00:11:39.346 "iscsi_get_target_nodes", 00:11:39.346 "iscsi_delete_initiator_group", 00:11:39.346 "iscsi_initiator_group_remove_initiators", 00:11:39.346 "iscsi_initiator_group_add_initiators", 00:11:39.346 "iscsi_create_initiator_group", 00:11:39.346 "iscsi_get_initiator_groups", 00:11:39.346 "nvmf_set_crdt", 00:11:39.346 "nvmf_set_config", 00:11:39.346 "nvmf_set_max_subsystems", 00:11:39.346 "nvmf_stop_mdns_prr", 00:11:39.346 "nvmf_publish_mdns_prr", 00:11:39.346 "nvmf_subsystem_get_listeners", 00:11:39.346 "nvmf_subsystem_get_qpairs", 00:11:39.346 "nvmf_subsystem_get_controllers", 00:11:39.346 "nvmf_get_stats", 00:11:39.346 "nvmf_get_transports", 00:11:39.346 "nvmf_create_transport", 00:11:39.346 "nvmf_get_targets", 00:11:39.346 "nvmf_delete_target", 00:11:39.346 "nvmf_create_target", 00:11:39.346 "nvmf_subsystem_allow_any_host", 00:11:39.346 
"nvmf_subsystem_set_keys", 00:11:39.346 "nvmf_subsystem_remove_host", 00:11:39.346 "nvmf_subsystem_add_host", 00:11:39.346 "nvmf_ns_remove_host", 00:11:39.346 "nvmf_ns_add_host", 00:11:39.346 "nvmf_subsystem_remove_ns", 00:11:39.346 "nvmf_subsystem_set_ns_ana_group", 00:11:39.346 "nvmf_subsystem_add_ns", 00:11:39.346 "nvmf_subsystem_listener_set_ana_state", 00:11:39.346 "nvmf_discovery_get_referrals", 00:11:39.346 "nvmf_discovery_remove_referral", 00:11:39.346 "nvmf_discovery_add_referral", 00:11:39.346 "nvmf_subsystem_remove_listener", 00:11:39.346 "nvmf_subsystem_add_listener", 00:11:39.346 "nvmf_delete_subsystem", 00:11:39.346 "nvmf_create_subsystem", 00:11:39.346 "nvmf_get_subsystems", 00:11:39.346 "env_dpdk_get_mem_stats", 00:11:39.346 "nbd_get_disks", 00:11:39.346 "nbd_stop_disk", 00:11:39.346 "nbd_start_disk", 00:11:39.346 "ublk_recover_disk", 00:11:39.346 "ublk_get_disks", 00:11:39.346 "ublk_stop_disk", 00:11:39.346 "ublk_start_disk", 00:11:39.346 "ublk_destroy_target", 00:11:39.346 "ublk_create_target", 00:11:39.346 "virtio_blk_create_transport", 00:11:39.346 "virtio_blk_get_transports", 00:11:39.346 "vhost_controller_set_coalescing", 00:11:39.346 "vhost_get_controllers", 00:11:39.346 "vhost_delete_controller", 00:11:39.346 "vhost_create_blk_controller", 00:11:39.346 "vhost_scsi_controller_remove_target", 00:11:39.346 "vhost_scsi_controller_add_target", 00:11:39.346 "vhost_start_scsi_controller", 00:11:39.346 "vhost_create_scsi_controller", 00:11:39.346 "thread_set_cpumask", 00:11:39.346 "scheduler_set_options", 00:11:39.346 "framework_get_governor", 00:11:39.346 "framework_get_scheduler", 00:11:39.346 "framework_set_scheduler", 00:11:39.346 "framework_get_reactors", 00:11:39.346 "thread_get_io_channels", 00:11:39.346 "thread_get_pollers", 00:11:39.346 "thread_get_stats", 00:11:39.346 "framework_monitor_context_switch", 00:11:39.346 "spdk_kill_instance", 00:11:39.346 "log_enable_timestamps", 00:11:39.346 "log_get_flags", 00:11:39.346 "log_clear_flag", 
00:11:39.346 "log_set_flag", 00:11:39.346 "log_get_level", 00:11:39.346 "log_set_level", 00:11:39.346 "log_get_print_level", 00:11:39.346 "log_set_print_level", 00:11:39.346 "framework_enable_cpumask_locks", 00:11:39.346 "framework_disable_cpumask_locks", 00:11:39.346 "framework_wait_init", 00:11:39.346 "framework_start_init", 00:11:39.346 "scsi_get_devices", 00:11:39.346 "bdev_get_histogram", 00:11:39.346 "bdev_enable_histogram", 00:11:39.346 "bdev_set_qos_limit", 00:11:39.346 "bdev_set_qd_sampling_period", 00:11:39.346 "bdev_get_bdevs", 00:11:39.346 "bdev_reset_iostat", 00:11:39.346 "bdev_get_iostat", 00:11:39.346 "bdev_examine", 00:11:39.346 "bdev_wait_for_examine", 00:11:39.346 "bdev_set_options", 00:11:39.346 "accel_get_stats", 00:11:39.346 "accel_set_options", 00:11:39.346 "accel_set_driver", 00:11:39.346 "accel_crypto_key_destroy", 00:11:39.346 "accel_crypto_keys_get", 00:11:39.346 "accel_crypto_key_create", 00:11:39.346 "accel_assign_opc", 00:11:39.346 "accel_get_module_info", 00:11:39.346 "accel_get_opc_assignments", 00:11:39.346 "vmd_rescan", 00:11:39.346 "vmd_remove_device", 00:11:39.346 "vmd_enable", 00:11:39.346 "sock_get_default_impl", 00:11:39.346 "sock_set_default_impl", 00:11:39.346 "sock_impl_set_options", 00:11:39.346 "sock_impl_get_options", 00:11:39.346 "iobuf_get_stats", 00:11:39.346 "iobuf_set_options", 00:11:39.346 "keyring_get_keys", 00:11:39.346 "framework_get_pci_devices", 00:11:39.346 "framework_get_config", 00:11:39.346 "framework_get_subsystems", 00:11:39.346 "fsdev_set_opts", 00:11:39.346 "fsdev_get_opts", 00:11:39.346 "trace_get_info", 00:11:39.346 "trace_get_tpoint_group_mask", 00:11:39.346 "trace_disable_tpoint_group", 00:11:39.346 "trace_enable_tpoint_group", 00:11:39.346 "trace_clear_tpoint_mask", 00:11:39.346 "trace_set_tpoint_mask", 00:11:39.346 "notify_get_notifications", 00:11:39.347 "notify_get_types", 00:11:39.347 "spdk_get_version", 00:11:39.347 "rpc_get_methods" 00:11:39.347 ] 00:11:39.347 14:44:09 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.347 14:44:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:39.347 14:44:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57861 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57861 ']' 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57861 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57861 00:11:39.347 killing process with pid 57861 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57861' 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57861 00:11:39.347 14:44:09 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57861 00:11:41.875 ************************************ 00:11:41.875 END TEST spdkcli_tcp 00:11:41.875 ************************************ 00:11:41.875 00:11:41.875 real 0m4.410s 00:11:41.875 user 0m8.051s 00:11:41.875 sys 0m0.708s 00:11:41.875 14:44:11 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:41.875 14:44:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.875 14:44:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:41.875 14:44:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:41.875 14:44:11 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:11:41.875 14:44:11 -- common/autotest_common.sh@10 -- # set +x 00:11:41.875 ************************************ 00:11:41.875 START TEST dpdk_mem_utility 00:11:41.875 ************************************ 00:11:41.875 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:41.875 * Looking for test storage... 00:11:41.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:41.875 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:41.875 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:11:41.875 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:42.134 
14:44:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.134 14:44:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:42.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.134 --rc genhtml_branch_coverage=1 00:11:42.134 --rc genhtml_function_coverage=1 00:11:42.134 --rc genhtml_legend=1 00:11:42.134 --rc geninfo_all_blocks=1 00:11:42.134 --rc geninfo_unexecuted_blocks=1 00:11:42.134 00:11:42.134 ' 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:42.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.134 --rc 
genhtml_branch_coverage=1 00:11:42.134 --rc genhtml_function_coverage=1 00:11:42.134 --rc genhtml_legend=1 00:11:42.134 --rc geninfo_all_blocks=1 00:11:42.134 --rc geninfo_unexecuted_blocks=1 00:11:42.134 00:11:42.134 ' 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:42.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.134 --rc genhtml_branch_coverage=1 00:11:42.134 --rc genhtml_function_coverage=1 00:11:42.134 --rc genhtml_legend=1 00:11:42.134 --rc geninfo_all_blocks=1 00:11:42.134 --rc geninfo_unexecuted_blocks=1 00:11:42.134 00:11:42.134 ' 00:11:42.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:42.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.134 --rc genhtml_branch_coverage=1 00:11:42.134 --rc genhtml_function_coverage=1 00:11:42.134 --rc genhtml_legend=1 00:11:42.134 --rc geninfo_all_blocks=1 00:11:42.134 --rc geninfo_unexecuted_blocks=1 00:11:42.134 00:11:42.134 ' 00:11:42.134 14:44:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:42.134 14:44:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57983 00:11:42.134 14:44:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:42.134 14:44:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57983 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57983 ']' 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:42.134 14:44:11 dpdk_mem_utility -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:42.134 14:44:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:42.134 [2024-11-04 14:44:11.922851] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:11:42.134 [2024-11-04 14:44:11.923217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57983 ] 00:11:42.393 [2024-11-04 14:44:12.109908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.651 [2024-11-04 14:44:12.302139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.586 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:43.586 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:11:43.587 14:44:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:43.587 14:44:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:43.587 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.587 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:43.587 { 00:11:43.587 "filename": "/tmp/spdk_mem_dump.txt" 00:11:43.587 } 00:11:43.587 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.587 14:44:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:43.587 DPDK memory size 816.000000 MiB in 1 heap(s) 00:11:43.587 1 heaps totaling size 816.000000 MiB 00:11:43.587 
size: 816.000000 MiB heap id: 0 00:11:43.587 end heaps---------- 00:11:43.587 9 mempools totaling size 595.772034 MiB 00:11:43.587 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:43.587 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:43.587 size: 92.545471 MiB name: bdev_io_57983 00:11:43.587 size: 50.003479 MiB name: msgpool_57983 00:11:43.587 size: 36.509338 MiB name: fsdev_io_57983 00:11:43.587 size: 21.763794 MiB name: PDU_Pool 00:11:43.587 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:43.587 size: 4.133484 MiB name: evtpool_57983 00:11:43.587 size: 0.026123 MiB name: Session_Pool 00:11:43.587 end mempools------- 00:11:43.587 6 memzones totaling size 4.142822 MiB 00:11:43.587 size: 1.000366 MiB name: RG_ring_0_57983 00:11:43.587 size: 1.000366 MiB name: RG_ring_1_57983 00:11:43.587 size: 1.000366 MiB name: RG_ring_4_57983 00:11:43.587 size: 1.000366 MiB name: RG_ring_5_57983 00:11:43.587 size: 0.125366 MiB name: RG_ring_2_57983 00:11:43.587 size: 0.015991 MiB name: RG_ring_3_57983 00:11:43.587 end memzones------- 00:11:43.587 14:44:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:43.587 heap id: 0 total size: 816.000000 MiB number of busy elements: 309 number of free elements: 18 00:11:43.587 list of free elements. 
size: 16.792847 MiB 00:11:43.587 element at address: 0x200006400000 with size: 1.995972 MiB 00:11:43.587 element at address: 0x20000a600000 with size: 1.995972 MiB 00:11:43.587 element at address: 0x200003e00000 with size: 1.991028 MiB 00:11:43.587 element at address: 0x200018d00040 with size: 0.999939 MiB 00:11:43.587 element at address: 0x200019100040 with size: 0.999939 MiB 00:11:43.587 element at address: 0x200019200000 with size: 0.999084 MiB 00:11:43.587 element at address: 0x200031e00000 with size: 0.994324 MiB 00:11:43.587 element at address: 0x200000400000 with size: 0.992004 MiB 00:11:43.587 element at address: 0x200018a00000 with size: 0.959656 MiB 00:11:43.587 element at address: 0x200019500040 with size: 0.936401 MiB 00:11:43.587 element at address: 0x200000200000 with size: 0.716980 MiB 00:11:43.587 element at address: 0x20001ac00000 with size: 0.563171 MiB 00:11:43.587 element at address: 0x200000c00000 with size: 0.490173 MiB 00:11:43.587 element at address: 0x200018e00000 with size: 0.487976 MiB 00:11:43.587 element at address: 0x200019600000 with size: 0.485413 MiB 00:11:43.587 element at address: 0x200012c00000 with size: 0.443481 MiB 00:11:43.587 element at address: 0x200028000000 with size: 0.390442 MiB 00:11:43.587 element at address: 0x200000800000 with size: 0.350891 MiB 00:11:43.587 list of standard malloc elements. 
size: 199.286255 MiB 00:11:43.587 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:11:43.587 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:11:43.587 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:11:43.587 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:11:43.587 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:11:43.587 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:11:43.587 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:11:43.587 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:11:43.587 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:11:43.587 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:11:43.587 element at address: 0x200012bff040 with size: 0.000305 MiB 00:11:43.587 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:11:43.587 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:11:43.587 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:11:43.587 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200000cff000 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:11:43.587 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:11:43.587 element at address: 0x200012bff180 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bff280 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bff380 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bff480 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bff580 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bff680 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bff780 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bff880 with size: 0.000244 MiB 00:11:43.588 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c71880 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c71980 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c72080 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012c72180 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:11:43.588 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:11:43.588 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:11:43.588 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac917c0 with size: 0.000244 
MiB 00:11:43.588 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac933c0 
with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:11:43.588 element at 
address: 0x20001ac94fc0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200028063f40 with size: 0.000244 MiB 00:11:43.588 element at address: 0x200028064040 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806af80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b080 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b180 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b280 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b380 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b480 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b580 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b680 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b780 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b880 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806b980 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806be80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c080 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c180 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c280 with size: 0.000244 MiB 
00:11:43.588 element at address: 0x20002806c380 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c480 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c580 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c680 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c780 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c880 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806c980 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:11:43.588 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d080 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d180 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d280 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d380 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d480 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d580 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d680 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d780 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d880 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806d980 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806da80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806db80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806de80 with 
size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806df80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e080 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e180 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e280 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e380 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e480 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e580 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e680 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e780 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e880 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806e980 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f080 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f180 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f280 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f380 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f480 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f580 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f680 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f780 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f880 with size: 0.000244 MiB 00:11:43.589 element at address: 0x20002806f980 with size: 0.000244 MiB 00:11:43.589 element at address: 
0x20002806fa80 with size: 0.000244 MiB
00:11:43.589 element at address: 0x20002806fb80 with size: 0.000244 MiB
00:11:43.589 element at address: 0x20002806fc80 with size: 0.000244 MiB
00:11:43.589 element at address: 0x20002806fd80 with size: 0.000244 MiB
00:11:43.589 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:11:43.589 list of memzone associated elements. size: 599.920898 MiB
00:11:43.589 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:11:43.589 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:11:43.589 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:11:43.589 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:11:43.589 element at address: 0x200012df4740 with size: 92.045105 MiB
00:11:43.589 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57983_0
00:11:43.589 element at address: 0x200000dff340 with size: 48.003113 MiB
00:11:43.589 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57983_0
00:11:43.589 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:11:43.589 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57983_0
00:11:43.589 element at address: 0x2000197be900 with size: 20.255615 MiB
00:11:43.589 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:11:43.589 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:11:43.589 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:11:43.589 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:11:43.589 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57983_0
00:11:43.589 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:11:43.589 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57983
00:11:43.589 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:11:43.589 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57983
00:11:43.589 element at address: 0x200018efde00 with size: 1.008179 MiB
00:11:43.589 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:11:43.589 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:11:43.589 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:11:43.589 element at address: 0x200018afde00 with size: 1.008179 MiB
00:11:43.589 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:11:43.589 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:11:43.589 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:11:43.589 element at address: 0x200000cff100 with size: 1.000549 MiB
00:11:43.589 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57983
00:11:43.589 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:11:43.589 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57983
00:11:43.589 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:11:43.589 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57983
00:11:43.589 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:11:43.589 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57983
00:11:43.589 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:11:43.589 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57983
00:11:43.589 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:11:43.589 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57983
00:11:43.589 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:11:43.589 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:11:43.589 element at address: 0x200012c72280 with size: 0.500549 MiB
00:11:43.589 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:11:43.589 element at address: 0x20001967c440 with size: 0.250549 MiB
00:11:43.589 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:11:43.589 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:11:43.589 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57983
00:11:43.589 element at address: 0x20000085df80 with size: 0.125549 MiB
00:11:43.589 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57983
00:11:43.589 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:11:43.589 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:11:43.589 element at address: 0x200028064140 with size: 0.023804 MiB
00:11:43.589 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:11:43.589 element at address: 0x200000859d40 with size: 0.016174 MiB
00:11:43.589 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57983
00:11:43.589 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:11:43.589 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:11:43.589 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:11:43.589 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57983
00:11:43.589 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:11:43.589 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57983
00:11:43.589 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:11:43.589 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57983
00:11:43.589 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:11:43.589 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:11:43.589 14:44:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:11:43.589 14:44:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57983
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57983 ']'
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57983
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57983
killing process with pid 57983
14:44:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57983'
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57983
00:11:43.589 14:44:13 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57983
00:11:46.133 ************************************
00:11:46.133 END TEST dpdk_mem_utility
00:11:46.133 ************************************
00:11:46.133
00:11:46.133 real 0m4.062s
00:11:46.133 user 0m4.121s
00:11:46.133 sys 0m0.649s
00:11:46.133 14:44:15 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:46.133 14:44:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:11:46.133 14:44:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:11:46.133 14:44:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:11:46.133 14:44:15 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:46.133 14:44:15 -- common/autotest_common.sh@10 -- # set +x
00:11:46.133 ************************************
00:11:46.133 START TEST event
00:11:46.133 ************************************
00:11:46.133 14:44:15 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
00:11:46.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:46.133 14:44:15 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:46.133 14:44:15 event -- common/autotest_common.sh@1691 -- # lcov --version 00:11:46.133 14:44:15 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:46.133 14:44:15 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:46.134 14:44:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.134 14:44:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.134 14:44:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.134 14:44:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.134 14:44:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.134 14:44:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.134 14:44:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.134 14:44:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.134 14:44:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.134 14:44:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.134 14:44:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.134 14:44:15 event -- scripts/common.sh@344 -- # case "$op" in 00:11:46.134 14:44:15 event -- scripts/common.sh@345 -- # : 1 00:11:46.134 14:44:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.134 14:44:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.134 14:44:15 event -- scripts/common.sh@365 -- # decimal 1 00:11:46.134 14:44:15 event -- scripts/common.sh@353 -- # local d=1 00:11:46.134 14:44:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.134 14:44:15 event -- scripts/common.sh@355 -- # echo 1 00:11:46.134 14:44:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.134 14:44:15 event -- scripts/common.sh@366 -- # decimal 2 00:11:46.134 14:44:15 event -- scripts/common.sh@353 -- # local d=2 00:11:46.134 14:44:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.134 14:44:15 event -- scripts/common.sh@355 -- # echo 2 00:11:46.134 14:44:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.134 14:44:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.134 14:44:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.134 14:44:15 event -- scripts/common.sh@368 -- # return 0 00:11:46.134 14:44:15 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.134 14:44:15 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:46.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.134 --rc genhtml_branch_coverage=1 00:11:46.134 --rc genhtml_function_coverage=1 00:11:46.134 --rc genhtml_legend=1 00:11:46.134 --rc geninfo_all_blocks=1 00:11:46.134 --rc geninfo_unexecuted_blocks=1 00:11:46.134 00:11:46.134 ' 00:11:46.134 14:44:15 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:46.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.134 --rc genhtml_branch_coverage=1 00:11:46.134 --rc genhtml_function_coverage=1 00:11:46.134 --rc genhtml_legend=1 00:11:46.134 --rc geninfo_all_blocks=1 00:11:46.134 --rc geninfo_unexecuted_blocks=1 00:11:46.134 00:11:46.134 ' 00:11:46.134 14:44:15 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:46.134 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:46.134 --rc genhtml_branch_coverage=1 00:11:46.134 --rc genhtml_function_coverage=1 00:11:46.134 --rc genhtml_legend=1 00:11:46.134 --rc geninfo_all_blocks=1 00:11:46.134 --rc geninfo_unexecuted_blocks=1 00:11:46.134 00:11:46.134 ' 00:11:46.134 14:44:15 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:46.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.134 --rc genhtml_branch_coverage=1 00:11:46.134 --rc genhtml_function_coverage=1 00:11:46.134 --rc genhtml_legend=1 00:11:46.134 --rc geninfo_all_blocks=1 00:11:46.134 --rc geninfo_unexecuted_blocks=1 00:11:46.134 00:11:46.134 ' 00:11:46.134 14:44:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:46.134 14:44:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:46.134 14:44:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:46.134 14:44:15 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:11:46.134 14:44:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:46.134 14:44:15 event -- common/autotest_common.sh@10 -- # set +x 00:11:46.134 ************************************ 00:11:46.134 START TEST event_perf 00:11:46.134 ************************************ 00:11:46.134 14:44:15 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:46.134 Running I/O for 1 seconds...[2024-11-04 14:44:16.006062] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:11:46.134 [2024-11-04 14:44:16.006321] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58091 ]
00:11:46.392 [2024-11-04 14:44:16.196413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:46.650 [2024-11-04 14:44:16.364478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:46.650 [2024-11-04 14:44:16.364614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:46.650 [2024-11-04 14:44:16.364696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:46.650 Running I/O for 1 seconds...[2024-11-04 14:44:16.365046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:48.027
00:11:48.027 lcore 0: 188512
00:11:48.027 lcore 1: 188513
00:11:48.027 lcore 2: 188515
00:11:48.027 lcore 3: 188517
00:11:48.027 done.
00:11:48.027
00:11:48.027 real 0m1.665s
00:11:48.027 user 0m4.391s
00:11:48.027 sys 0m0.142s
00:11:48.027 14:44:17 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:48.027 14:44:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:11:48.027 ************************************
00:11:48.027 END TEST event_perf
00:11:48.027 ************************************
00:11:48.027 14:44:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:11:48.027 14:44:17 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:11:48.027 14:44:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:48.027 14:44:17 event -- common/autotest_common.sh@10 -- # set +x
00:11:48.027 ************************************
00:11:48.027 START TEST event_reactor
00:11:48.027 ************************************
00:11:48.027 14:44:17 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:11:48.027 [2024-11-04 14:44:17.716643] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization...
00:11:48.027 [2024-11-04 14:44:17.716815] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58136 ]
00:11:48.027 [2024-11-04 14:44:17.905425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:48.285 [2024-11-04 14:44:18.066698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:49.660 test_start
00:11:49.660 oneshot
00:11:49.660 tick 100
00:11:49.660 tick 100
00:11:49.660 tick 250
00:11:49.660 tick 100
00:11:49.660 tick 100
00:11:49.660 tick 100
00:11:49.660 tick 500
00:11:49.660 tick 250
00:11:49.660 tick 100
00:11:49.660 tick 100
00:11:49.660 tick 250
00:11:49.660 tick 100
00:11:49.660 tick 100
00:11:49.660 test_end
00:11:49.660
00:11:49.660 real 0m1.633s
00:11:49.660 user 0m1.419s
00:11:49.660 sys 0m0.104s
00:11:49.660 14:44:19 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:49.660 ************************************
00:11:49.660 END TEST event_reactor
00:11:49.660 ************************************
00:11:49.660 14:44:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:11:49.660 14:44:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:11:49.660 14:44:19 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:11:49.660 14:44:19 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:49.660 14:44:19 event -- common/autotest_common.sh@10 -- # set +x
00:11:49.660 ************************************
00:11:49.660 START TEST event_reactor_perf
00:11:49.660 ************************************
00:11:49.661 14:44:19 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:11:49.661 [2024-11-04 14:44:19.400983] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization...
00:11:49.661 [2024-11-04 14:44:19.401148] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ]
00:11:49.918 [2024-11-04 14:44:19.590331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:49.919 [2024-11-04 14:44:19.754570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:51.293 test_start
00:11:51.293 test_end
00:11:51.293 Performance: 265794 events per second
00:11:51.293
00:11:51.293 real 0m1.634s
00:11:51.293 user 0m1.418s
00:11:51.293 sys 0m0.107s
00:11:51.293 14:44:20 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:51.293 14:44:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:11:51.293 ************************************
00:11:51.293 END TEST event_reactor_perf
00:11:51.293 ************************************
00:11:51.293 14:44:21 event -- event/event.sh@49 -- # uname -s
00:11:51.293 14:44:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:11:51.293 14:44:21 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:11:51.293 14:44:21 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:11:51.293 14:44:21 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:51.293 14:44:21 event -- common/autotest_common.sh@10 -- # set +x
00:11:51.293 ************************************
00:11:51.293 START TEST event_scheduler
00:11:51.293 ************************************
00:11:51.293 14:44:21 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:11:51.293 * Looking for test storage...
00:11:51.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:11:51.293 14:44:21 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:11:51.293 14:44:21 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version
00:11:51.293 14:44:21 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:51.552 14:44:21 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:11:51.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.552 --rc genhtml_branch_coverage=1
00:11:51.552 --rc genhtml_function_coverage=1
00:11:51.552 --rc genhtml_legend=1
00:11:51.552 --rc geninfo_all_blocks=1
00:11:51.552 --rc geninfo_unexecuted_blocks=1
00:11:51.552
00:11:51.552 '
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:11:51.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.552 --rc genhtml_branch_coverage=1
00:11:51.552 --rc genhtml_function_coverage=1
00:11:51.552 --rc genhtml_legend=1
00:11:51.552 --rc geninfo_all_blocks=1
00:11:51.552 --rc geninfo_unexecuted_blocks=1
00:11:51.552
00:11:51.552 '
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:11:51.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.552 --rc genhtml_branch_coverage=1
00:11:51.552 --rc genhtml_function_coverage=1
00:11:51.552 --rc genhtml_legend=1
00:11:51.552 --rc geninfo_all_blocks=1
00:11:51.552 --rc geninfo_unexecuted_blocks=1
00:11:51.552
00:11:51.552 '
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:11:51.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.552 --rc genhtml_branch_coverage=1
00:11:51.552 --rc genhtml_function_coverage=1
00:11:51.552 --rc genhtml_legend=1
00:11:51.552 --rc geninfo_all_blocks=1
00:11:51.552 --rc geninfo_unexecuted_blocks=1
00:11:51.552
00:11:51.552 '
00:11:51.552 14:44:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:11:51.552 14:44:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58249
00:11:51.552 14:44:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:11:51.552 14:44:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58249
00:11:51.552 14:44:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58249 ']'
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:51.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable
00:11:51.552 14:44:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:11:51.552 [2024-11-04 14:44:21.327484] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization...
00:11:51.552 [2024-11-04 14:44:21.327654] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58249 ]
00:11:51.811 [2024-11-04 14:44:21.504620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:51.811 [2024-11-04 14:44:21.646869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:51.811 [2024-11-04 14:44:21.647032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:51.811 [2024-11-04 14:44:21.647313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:51.811 [2024-11-04 14:44:21.648261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:52.755 14:44:22 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:11:52.755 14:44:22 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0
00:11:52.755 14:44:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:11:52.755 14:44:22 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.755 14:44:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:11:52.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:11:52.755 POWER: Cannot set governor of lcore 0 to userspace
00:11:52.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:11:52.755 POWER: Cannot set governor of lcore 0 to performance
00:11:52.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:11:52.755 POWER: Cannot set governor of lcore 0 to userspace
00:11:52.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:11:52.755 POWER: Cannot set governor of lcore 0 to userspace
00:11:52.755 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:11:52.755 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:11:52.755 POWER: Unable to set Power Management Environment for lcore 0
00:11:52.755 [2024-11-04 14:44:22.385147] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:11:52.755 [2024-11-04 14:44:22.385377] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:11:52.755 [2024-11-04 14:44:22.385570] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:11:52.755 [2024-11-04 14:44:22.385793] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:11:52.755 [2024-11-04 14:44:22.385972] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:11:52.755 [2024-11-04 14:44:22.386061] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:11:52.755 14:44:22 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.755 14:44:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:11:52.755 14:44:22 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.755 14:44:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:11:53.014 [2024-11-04 14:44:22.740282] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:11:53.014 14:44:22 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.014 14:44:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:11:53.014 14:44:22 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:11:53.014 14:44:22 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:53.014 14:44:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:11:53.014 ************************************
00:11:53.014 START TEST scheduler_create_thread
00:11:53.014 ************************************
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.014 2
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.014 3
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.014 4
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.014 5
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.014 6
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.014 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.015 7
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.015 8
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.015 9
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.015 10
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.015 14:44:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.581 ************************************
00:11:53.581 END TEST scheduler_create_thread
00:11:53.581 ************************************
00:11:53.581 14:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.581
00:11:53.581 real 0m0.597s
00:11:53.581 user 0m0.015s
00:11:53.581 sys 0m0.006s
00:11:53.581 14:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:53.581 14:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.581 14:44:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:11:53.581 14:44:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58249
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58249 ']'
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58249
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@957 -- # uname
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58249
00:11:53.581 killing process with pid 58249 14:44:23 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58249'
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58249
00:11:53.581 14:44:23 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58249
00:11:54.148 [2024-11-04 14:44:23.826791] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:11:55.108 ************************************
00:11:55.108 END TEST event_scheduler
00:11:55.108 ************************************
00:11:55.108
00:11:55.108 real 0m3.864s
00:11:55.108 user 0m7.779s
00:11:55.108 sys 0m0.523s
00:11:55.108 14:44:24 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:55.108 14:44:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:11:55.108 14:44:24 event -- event/event.sh@51 -- # modprobe -n nbd
00:11:55.108 14:44:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:11:55.108 14:44:24 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:11:55.108 14:44:24 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:55.108 14:44:24 event -- common/autotest_common.sh@10 -- # set +x
00:11:55.108 ************************************
00:11:55.108 START TEST app_repeat
00:11:55.108 ************************************
00:11:55.108 14:44:24 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:11:55.108 Process app_repeat pid: 58338 spdk_app_start Round 0 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58338
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58338'
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:11:55.108 14:44:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58338 /var/tmp/spdk-nbd.sock
00:11:55.108 14:44:24 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58338 ']'
00:11:55.108 14:44:24 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:55.108 14:44:24 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:11:55.108 14:44:24 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:11:55.108 14:44:24 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:11:55.108 14:44:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:11:55.367 [2024-11-04 14:44:25.029938] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization...
00:11:55.367 [2024-11-04 14:44:25.030094] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58338 ]
00:11:55.367 [2024-11-04 14:44:25.203227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:55.626 [2024-11-04 14:44:25.345816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:55.626 [2024-11-04 14:44:25.345851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:56.561 14:44:26 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:11:56.561 14:44:26 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:11:56.561 14:44:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:11:56.561 Malloc0
00:11:56.818 14:44:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:11:57.076 Malloc1
00:11:57.076 14:44:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:57.076 14:44:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:11:57.334 /dev/nbd0
00:11:57.334 14:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:57.334 14:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:11:57.334 1+0 records in
00:11:57.334 1+0 records out
00:11:57.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370213 s, 11.1 MB/s
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:11:57.334 14:44:27 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:11:57.334 14:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:57.334 14:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:57.334 14:44:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:11:57.592 /dev/nbd1
00:11:57.592 14:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:11:57.592 14:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:11:57.592 1+0 records in
00:11:57.592 1+0 records out
00:11:57.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266126 s, 15.4 MB/s
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:11:57.592 14:44:27 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:11:57.592 14:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:57.592 14:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:57.592 14:44:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:57.592 14:44:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:57.592 14:44:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:58.157 14:44:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:11:58.157 {
00:11:58.157 "nbd_device": "/dev/nbd0",
00:11:58.157 "bdev_name": "Malloc0"
00:11:58.157 },
00:11:58.157 {
00:11:58.157 "nbd_device": "/dev/nbd1",
00:11:58.157 "bdev_name": "Malloc1"
00:11:58.157 }
00:11:58.157 ]'
00:11:58.157 14:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:58.157 14:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:11:58.157 {
00:11:58.157 "nbd_device": "/dev/nbd0",
00:11:58.157 "bdev_name": "Malloc0"
00:11:58.157 },
00:11:58.157 {
00:11:58.157 "nbd_device": "/dev/nbd1",
00:11:58.157 "bdev_name": "Malloc1"
00:11:58.157 }
00:11:58.157 ]'
00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:58.158 /dev/nbd1' 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:58.158 /dev/nbd1' 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:58.158 256+0 records in 00:11:58.158 256+0 records out 00:11:58.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638466 s, 164 MB/s 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:58.158 256+0 records in 00:11:58.158 256+0 records out 00:11:58.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326543 s, 32.1 MB/s 00:11:58.158 14:44:27 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:58.158 256+0 records in 00:11:58.158 256+0 records out 00:11:58.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0346468 s, 30.3 MB/s 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.158 14:44:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.415 14:44:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.416 14:44:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.981 14:44:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:59.239 14:44:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:59.239 14:44:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:59.805 14:44:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:01.179 [2024-11-04 14:44:30.679892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:01.179 [2024-11-04 14:44:30.826817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.179 [2024-11-04 14:44:30.826822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.179 
[2024-11-04 14:44:31.045325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:01.179 [2024-11-04 14:44:31.045697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:03.082 spdk_app_start Round 1 00:12:03.082 14:44:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:03.082 14:44:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:03.082 14:44:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58338 /var/tmp/spdk-nbd.sock 00:12:03.082 14:44:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58338 ']' 00:12:03.082 14:44:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:03.082 14:44:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:03.083 14:44:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:03.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
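The `waitfornbd` traces above (common/autotest_common.sh@870-891) poll /proc/partitions up to 20 times for the device name, then do a 1-block dd read to confirm it answers I/O. A minimal standalone sketch of that polling loop, with an illustrative function name and a temp file standing in for /proc/partitions so it runs anywhere:

```shell
# Sketch of the waitfornbd pattern traced above: retry up to 20 times until
# a name appears as a whole word in a partitions-style table, then succeed.
# wait_for_name and the temp table are illustrative, not from the script.
wait_for_name() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$name" "$table"; then
            return 0            # device visible, caller may now dd-probe it
        fi
        sleep 0.1               # brief back-off between polls
    done
    return 1                    # never showed up within the retry budget
}

tbl=$(mktemp)
echo "nbd0 nbd1" > "$tbl"       # stand-in for /proc/partitions content
wait_for_name nbd0 "$tbl" && echo "nbd0 present"
rm -f "$tbl"
```

The real script follows the successful grep with `dd if=/dev/nbdX ... count=1 iflag=direct` and a `stat -c %s` size check, which is why each trace ends with a 4096-byte transfer.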
00:12:03.083 14:44:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:03.083 14:44:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:03.083 14:44:32 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:03.083 14:44:32 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:12:03.083 14:44:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:03.341 Malloc0 00:12:03.341 14:44:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:03.908 Malloc1 00:12:03.908 14:44:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.908 14:44:33 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.908 14:44:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:04.167 /dev/nbd0 00:12:04.167 14:44:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:04.167 14:44:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:04.167 1+0 records in 00:12:04.167 1+0 records out 00:12:04.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868862 s, 4.7 MB/s 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:04.167 14:44:33 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:04.167 14:44:33 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:04.167 14:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.167 14:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:04.167 14:44:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:04.448 /dev/nbd1 00:12:04.448 14:44:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:04.448 14:44:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:04.448 1+0 records in 00:12:04.448 1+0 records out 00:12:04.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002382 s, 17.2 MB/s 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:04.448 14:44:34 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:04.448 14:44:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:04.448 14:44:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.448 14:44:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:04.448 14:44:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:04.448 14:44:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.448 14:44:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:05.014 { 00:12:05.014 "nbd_device": "/dev/nbd0", 00:12:05.014 "bdev_name": "Malloc0" 00:12:05.014 }, 00:12:05.014 { 00:12:05.014 "nbd_device": "/dev/nbd1", 00:12:05.014 "bdev_name": "Malloc1" 00:12:05.014 } 00:12:05.014 ]' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:05.014 { 00:12:05.014 "nbd_device": "/dev/nbd0", 00:12:05.014 "bdev_name": "Malloc0" 00:12:05.014 }, 00:12:05.014 { 00:12:05.014 "nbd_device": "/dev/nbd1", 00:12:05.014 "bdev_name": "Malloc1" 00:12:05.014 } 00:12:05.014 ]' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:05.014 /dev/nbd1' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:05.014 /dev/nbd1' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:05.014 
14:44:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:05.014 256+0 records in 00:12:05.014 256+0 records out 00:12:05.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100487 s, 104 MB/s 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:05.014 256+0 records in 00:12:05.014 256+0 records out 00:12:05.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293345 s, 35.7 MB/s 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:05.014 256+0 records in 00:12:05.014 256+0 records out 00:12:05.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312633 s, 33.5 MB/s 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
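The `nbd_dd_data_verify` sequence traced here writes 1 MiB of /dev/urandom data through each nbd device, then byte-compares the devices against the source file with `cmp -b -n 1M`. A hedged sketch of that write/verify cycle, using temp files as stand-ins for /dev/nbd0 and /dev/nbd1 (and omitting the `oflag=direct` seen in the trace, since regular files may reject O_DIRECT):

```shell
# Sketch of the nbd_dd_data_verify write+verify phases traced above.
# All paths here are illustrative temp files, not real nbd devices.
tmp=$(mktemp)                                    # stand-in for nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256 status=none

dev0=$(mktemp); dev1=$(mktemp)                   # stand-ins for nbd0/nbd1
for dev in "$dev0" "$dev1"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 status=none   # write phase
done
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp" "$dev"                   # verify phase: exit 1 on diff
done
echo "verify ok"
rm -f "$tmp" "$dev0" "$dev1"
```

Writing the same random file to every device and comparing afterwards catches both data corruption and crossed device mappings in one pass, which is why the script deletes `nbdrandtest` only after both cmp calls succeed.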
00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:05.014 14:44:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.015 14:44:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:05.274 14:44:35 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.274 14:44:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.532 14:44:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:06.098 14:44:35 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:06.098 14:44:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:06.098 14:44:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:06.098 14:44:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:06.099 14:44:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:06.099 14:44:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:06.099 14:44:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:06.099 14:44:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:06.099 14:44:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:06.099 14:44:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:06.099 14:44:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:06.099 14:44:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:06.099 14:44:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:06.357 14:44:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:07.731 [2024-11-04 14:44:37.310065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:07.731 [2024-11-04 14:44:37.444033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.731 [2024-11-04 14:44:37.444034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.989 [2024-11-04 14:44:37.641248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:07.989 [2024-11-04 14:44:37.641365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:09.366 spdk_app_start Round 2 00:12:09.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
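The `nbd_get_count` traces above (nbd_common.sh@61-66) extract `nbd_device` paths from the RPC's JSON with jq and count them with `grep -c`. The trailing `true` in the trace matters: when the disk list is empty, `grep -c` still prints 0 but exits non-zero. A small sketch of just the counting step, with an illustrative function name:

```shell
# Sketch of the count step in nbd_get_count. The real script first runs
# jq -r '.[] | .nbd_device' over the nbd_get_disks JSON; count_nbd below
# is an illustrative name taking that extracted name list as input.
count_nbd() {
    # grep -c prints 0 but exits 1 when nothing matches, so keep the
    # trailing "true" exactly as the traced script does (nbd_common.sh@65).
    echo "$1" | grep -c /dev/nbd || true
}

count_nbd $'/dev/nbd0\n/dev/nbd1'   # both disks attached
count_nbd ''                        # all disks stopped
```

The caller then compares the count against the expected value (`'[' 2 -ne 2 ']'` after start, `'[' 0 -ne 0 ']'` after stop), so both the attached and the empty case must yield a clean number.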
00:12:09.366 14:44:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:09.366 14:44:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:09.366 14:44:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58338 /var/tmp/spdk-nbd.sock 00:12:09.366 14:44:39 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58338 ']' 00:12:09.366 14:44:39 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:09.366 14:44:39 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:09.366 14:44:39 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:09.366 14:44:39 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:09.366 14:44:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:09.932 14:44:39 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:09.932 14:44:39 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:12:09.932 14:44:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:10.190 Malloc0 00:12:10.190 14:44:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:10.448 Malloc1 00:12:10.706 14:44:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.706 14:44:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:10.964 /dev/nbd0 00:12:10.964 14:44:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:10.964 14:44:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:10.964 1+0 records in 00:12:10.964 1+0 records out 00:12:10.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431084 s, 9.5 MB/s 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:10.964 14:44:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:10.964 14:44:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.964 14:44:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.964 14:44:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:11.223 /dev/nbd1 00:12:11.223 14:44:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:11.223 14:44:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:11.223 14:44:41 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:11.223 1+0 records in 00:12:11.223 1+0 records out 00:12:11.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339513 s, 12.1 MB/s 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:11.223 14:44:41 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:12:11.223 14:44:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.223 14:44:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:11.223 14:44:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:11.223 14:44:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.223 14:44:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:11.481 14:44:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:11.481 { 00:12:11.481 "nbd_device": "/dev/nbd0", 00:12:11.481 "bdev_name": "Malloc0" 00:12:11.481 }, 00:12:11.481 { 00:12:11.481 "nbd_device": "/dev/nbd1", 00:12:11.481 "bdev_name": "Malloc1" 00:12:11.481 } 00:12:11.481 ]' 00:12:11.481 14:44:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:11.481 { 
00:12:11.481 "nbd_device": "/dev/nbd0", 00:12:11.481 "bdev_name": "Malloc0" 00:12:11.481 }, 00:12:11.481 { 00:12:11.481 "nbd_device": "/dev/nbd1", 00:12:11.481 "bdev_name": "Malloc1" 00:12:11.481 } 00:12:11.481 ]' 00:12:11.481 14:44:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:11.740 /dev/nbd1' 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:11.740 /dev/nbd1' 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:11.740 256+0 records in 00:12:11.740 256+0 records out 00:12:11.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104491 s, 100 MB/s 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.740 14:44:41 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:11.740 256+0 records in 00:12:11.740 256+0 records out 00:12:11.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0361937 s, 29.0 MB/s 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:11.740 256+0 records in 00:12:11.740 256+0 records out 00:12:11.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308268 s, 34.0 MB/s 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.740 14:44:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.999 14:44:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:12.257 14:44:42 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.257 14:44:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:12.824 14:44:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:12.824 14:44:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:13.082 14:44:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:14.459 
[2024-11-04 14:44:44.024882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:14.459 [2024-11-04 14:44:44.154652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.459 [2024-11-04 14:44:44.154665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.459 [2024-11-04 14:44:44.348467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:14.459 [2024-11-04 14:44:44.348564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:16.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:16.358 14:44:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58338 /var/tmp/spdk-nbd.sock 00:12:16.358 14:44:45 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58338 ']' 00:12:16.358 14:44:45 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:16.358 14:44:45 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:16.358 14:44:45 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:12:16.358 14:44:45 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:16.358 14:44:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:12:16.617 14:44:46 event.app_repeat -- event/event.sh@39 -- # killprocess 58338 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58338 ']' 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58338 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58338 00:12:16.617 killing process with pid 58338 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58338' 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58338 00:12:16.617 14:44:46 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58338 00:12:17.554 spdk_app_start is called in Round 0. 00:12:17.554 Shutdown signal received, stop current app iteration 00:12:17.554 Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 reinitialization... 00:12:17.554 spdk_app_start is called in Round 1. 00:12:17.554 Shutdown signal received, stop current app iteration 00:12:17.554 Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 reinitialization... 00:12:17.554 spdk_app_start is called in Round 2. 
00:12:17.554 Shutdown signal received, stop current app iteration 00:12:17.554 Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 reinitialization... 00:12:17.554 spdk_app_start is called in Round 3. 00:12:17.554 Shutdown signal received, stop current app iteration 00:12:17.554 14:44:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:17.554 14:44:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:17.554 00:12:17.554 real 0m22.313s 00:12:17.554 user 0m49.652s 00:12:17.554 sys 0m3.213s 00:12:17.554 14:44:47 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:17.554 ************************************ 00:12:17.554 END TEST app_repeat 00:12:17.554 ************************************ 00:12:17.554 14:44:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:17.554 14:44:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:17.554 14:44:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:17.554 14:44:47 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:17.554 14:44:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:17.554 14:44:47 event -- common/autotest_common.sh@10 -- # set +x 00:12:17.554 ************************************ 00:12:17.554 START TEST cpu_locks 00:12:17.554 ************************************ 00:12:17.554 14:44:47 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:17.554 * Looking for test storage... 
00:12:17.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:17.554 14:44:47 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:17.554 14:44:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:12:17.554 14:44:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:17.813 14:44:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.813 14:44:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:12:17.813 14:44:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.813 14:44:47 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.813 --rc genhtml_branch_coverage=1 00:12:17.813 --rc genhtml_function_coverage=1 00:12:17.813 --rc genhtml_legend=1 00:12:17.813 --rc geninfo_all_blocks=1 00:12:17.813 --rc geninfo_unexecuted_blocks=1 00:12:17.813 00:12:17.813 ' 00:12:17.813 14:44:47 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.813 --rc genhtml_branch_coverage=1 00:12:17.813 --rc genhtml_function_coverage=1 00:12:17.813 --rc genhtml_legend=1 00:12:17.813 --rc geninfo_all_blocks=1 00:12:17.813 --rc geninfo_unexecuted_blocks=1 
00:12:17.813 00:12:17.813 ' 00:12:17.813 14:44:47 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.813 --rc genhtml_branch_coverage=1 00:12:17.813 --rc genhtml_function_coverage=1 00:12:17.813 --rc genhtml_legend=1 00:12:17.813 --rc geninfo_all_blocks=1 00:12:17.813 --rc geninfo_unexecuted_blocks=1 00:12:17.813 00:12:17.813 ' 00:12:17.813 14:44:47 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.813 --rc genhtml_branch_coverage=1 00:12:17.813 --rc genhtml_function_coverage=1 00:12:17.813 --rc genhtml_legend=1 00:12:17.813 --rc geninfo_all_blocks=1 00:12:17.813 --rc geninfo_unexecuted_blocks=1 00:12:17.813 00:12:17.813 ' 00:12:17.813 14:44:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:17.813 14:44:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:17.813 14:44:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:17.813 14:44:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:17.813 14:44:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:17.813 14:44:47 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:17.814 14:44:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:17.814 ************************************ 00:12:17.814 START TEST default_locks 00:12:17.814 ************************************ 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58817 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58817 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- 
common/autotest_common.sh@833 -- # '[' -z 58817 ']' 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:17.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:17.814 14:44:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:17.814 [2024-11-04 14:44:47.655965] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:12:17.814 [2024-11-04 14:44:47.656154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58817 ] 00:12:18.108 [2024-11-04 14:44:47.847970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.367 [2024-11-04 14:44:48.000205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.302 14:44:48 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.302 14:44:48 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:12:19.302 14:44:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58817 00:12:19.302 14:44:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58817 00:12:19.302 14:44:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58817 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58817 ']' 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58817 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58817 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:19.561 killing process with pid 58817 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58817' 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58817 00:12:19.561 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58817 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58817 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58817 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58817 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58817 ']' 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:22.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:22.100 ERROR: process (pid: 58817) is no longer running 00:12:22.100 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58817) - No such process 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:22.100 00:12:22.100 real 0m4.073s 00:12:22.100 user 0m4.037s 00:12:22.100 sys 0m0.727s 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:22.100 ************************************ 00:12:22.100 END TEST default_locks 00:12:22.100 ************************************ 00:12:22.100 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:22.100 14:44:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:22.100 14:44:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:12:22.100 14:44:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:22.100 14:44:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:22.100 ************************************ 00:12:22.100 START TEST default_locks_via_rpc 00:12:22.100 ************************************ 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58890 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58890 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58890 ']' 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:22.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:22.100 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.100 [2024-11-04 14:44:51.776918] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:12:22.100 [2024-11-04 14:44:51.777114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58890 ] 00:12:22.100 [2024-11-04 14:44:51.961718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.358 [2024-11-04 14:44:52.112948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.291 14:44:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58890 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58890 00:12:23.291 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:23.549 14:44:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58890 00:12:23.549 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58890 ']' 00:12:23.549 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58890 00:12:23.549 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:12:23.549 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:23.549 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58890 00:12:23.806 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:23.806 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:23.806 killing process with pid 58890 00:12:23.806 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58890' 00:12:23.806 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58890 00:12:23.806 14:44:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58890 00:12:26.339 00:12:26.339 real 0m4.075s 00:12:26.339 user 0m4.141s 00:12:26.339 sys 0m0.743s 00:12:26.339 14:44:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.339 14:44:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.339 ************************************ 00:12:26.339 END TEST default_locks_via_rpc 00:12:26.339 ************************************ 00:12:26.339 14:44:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:26.339 14:44:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:26.339 14:44:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.339 14:44:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:26.339 ************************************ 00:12:26.339 START TEST non_locking_app_on_locked_coremask 00:12:26.339 ************************************ 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58964 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58964 /var/tmp/spdk.sock 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58964 ']' 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:26.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.339 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:26.339 [2024-11-04 14:44:55.927213] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:12:26.339 [2024-11-04 14:44:55.928200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:12:26.339 [2024-11-04 14:44:56.124444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.597 [2024-11-04 14:44:56.283198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58991 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58991 /var/tmp/spdk2.sock 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58991 ']' 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:27.530 14:44:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:27.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:27.530 14:44:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:27.530 [2024-11-04 14:44:57.276159] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:12:27.530 [2024-11-04 14:44:57.276359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:12:27.793 [2024-11-04 14:44:57.475046] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:27.793 [2024-11-04 14:44:57.475120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.050 [2024-11-04 14:44:57.742339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.578 14:45:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:30.578 14:45:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:30.578 14:45:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58964 00:12:30.578 14:45:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58964 00:12:30.578 14:45:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58964 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58964 ']' 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58964 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58964 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:31.513 killing process with pid 58964 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58964' 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58964 00:12:31.513 14:45:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58964 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58991 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58991 ']' 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58991 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58991 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:36.814 killing process with pid 58991 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58991' 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58991 00:12:36.814 14:45:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58991 00:12:38.190 00:12:38.190 real 0m12.151s 00:12:38.190 user 0m12.765s 00:12:38.190 sys 0m1.577s 00:12:38.190 14:45:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:12:38.190 ************************************ 00:12:38.190 END TEST non_locking_app_on_locked_coremask 00:12:38.190 14:45:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:38.190 ************************************ 00:12:38.190 14:45:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:38.190 14:45:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:38.190 14:45:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:38.190 14:45:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:38.190 ************************************ 00:12:38.190 START TEST locking_app_on_unlocked_coremask 00:12:38.190 ************************************ 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59142 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59142 /var/tmp/spdk.sock 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59142 ']' 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:38.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:38.190 14:45:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:38.447 [2024-11-04 14:45:08.122181] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:12:38.447 [2024-11-04 14:45:08.123290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59142 ] 00:12:38.447 [2024-11-04 14:45:08.308693] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:38.447 [2024-11-04 14:45:08.308799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.704 [2024-11-04 14:45:08.470696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59164 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59164 /var/tmp/spdk2.sock 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59164 ']' 
00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:39.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:39.640 14:45:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:39.640 [2024-11-04 14:45:09.486678] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:12:39.640 [2024-11-04 14:45:09.486874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59164 ] 00:12:39.898 [2024-11-04 14:45:09.691587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.157 [2024-11-04 14:45:09.955189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.718 14:45:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:42.718 14:45:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:42.718 14:45:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59164 00:12:42.718 14:45:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59164 00:12:42.718 14:45:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59142 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59142 ']' 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59142 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59142 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:43.285 killing process with pid 59142 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59142' 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59142 00:12:43.285 14:45:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59142 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59164 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59164 ']' 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59164 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@957 -- # uname 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59164 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:48.586 killing process with pid 59164 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59164' 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59164 00:12:48.586 14:45:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59164 00:12:51.138 00:12:51.138 real 0m12.451s 00:12:51.138 user 0m13.012s 00:12:51.138 sys 0m1.580s 00:12:51.138 ************************************ 00:12:51.138 END TEST locking_app_on_unlocked_coremask 00:12:51.138 ************************************ 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.138 14:45:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:51.138 14:45:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:51.138 14:45:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:51.138 14:45:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:51.138 ************************************ 00:12:51.138 START TEST 
locking_app_on_locked_coremask 00:12:51.138 ************************************ 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59323 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59323 /var/tmp/spdk.sock 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59323 ']' 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:51.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:51.138 14:45:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.138 [2024-11-04 14:45:20.666416] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:12:51.138 [2024-11-04 14:45:20.666595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:12:51.138 [2024-11-04 14:45:20.844944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.138 [2024-11-04 14:45:20.979589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59339 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59339 /var/tmp/spdk2.sock 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59339 /var/tmp/spdk2.sock 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59339 /var/tmp/spdk2.sock 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59339 ']' 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.074 14:45:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:52.333 [2024-11-04 14:45:21.987145] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:12:52.333 [2024-11-04 14:45:21.987389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59339 ] 00:12:52.333 [2024-11-04 14:45:22.191817] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59323 has claimed it. 00:12:52.333 [2024-11-04 14:45:22.191902] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:12:52.899 ERROR: process (pid: 59339) is no longer running 00:12:52.899 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59339) - No such process 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59323 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59323 00:12:52.899 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:53.158 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59323 00:12:53.158 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59323 ']' 00:12:53.158 14:45:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59323 00:12:53.158 14:45:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:12:53.158 14:45:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:53.158 14:45:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59323 00:12:53.158 
killing process with pid 59323 00:12:53.158 14:45:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:53.158 14:45:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:53.158 14:45:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59323' 00:12:53.158 14:45:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59323 00:12:53.158 14:45:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59323 00:12:55.693 00:12:55.693 real 0m4.809s 00:12:55.693 user 0m5.132s 00:12:55.693 sys 0m0.910s 00:12:55.693 14:45:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:55.693 14:45:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:55.693 ************************************ 00:12:55.693 END TEST locking_app_on_locked_coremask 00:12:55.693 ************************************ 00:12:55.693 14:45:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:55.693 14:45:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:55.693 14:45:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:55.693 14:45:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:55.693 ************************************ 00:12:55.693 START TEST locking_overlapped_coremask 00:12:55.693 ************************************ 00:12:55.693 14:45:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:12:55.693 14:45:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59408 00:12:55.693 14:45:25 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59408 /var/tmp/spdk.sock 00:12:55.694 14:45:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:55.694 14:45:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59408 ']' 00:12:55.694 14:45:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.694 14:45:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:55.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.694 14:45:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.694 14:45:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:55.694 14:45:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:55.694 [2024-11-04 14:45:25.515279] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:12:55.694 [2024-11-04 14:45:25.515495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59408 ] 00:12:55.952 [2024-11-04 14:45:25.709163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.211 [2024-11-04 14:45:25.850587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.211 [2024-11-04 14:45:25.850674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.211 [2024-11-04 14:45:25.850681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59432 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59432 /var/tmp/spdk2.sock 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59432 /var/tmp/spdk2.sock 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59432 /var/tmp/spdk2.sock 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59432 ']' 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:57.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:57.147 14:45:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:57.147 [2024-11-04 14:45:26.924543] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:12:57.147 [2024-11-04 14:45:26.924775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59432 ] 00:12:57.405 [2024-11-04 14:45:27.139533] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59408 has claimed it. 00:12:57.405 [2024-11-04 14:45:27.139616] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
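The claim failure above is the expected outcome of the test: the first target (pid 59408, mask 0x7) holds per-core lock files for cores 0-2, so the second target (mask 0x1c) cannot claim core 2 and exits. A minimal sketch of the general per-core lock-file technique using `flock` — the function name, the `/tmp/demo_cpu_lock_*` path, and the use of `flock` itself are illustrative assumptions here, not SPDK's actual implementation (the log only shows that `/var/tmp/spdk_cpu_lock_*` files exist):

```shell
#!/usr/bin/env bash
# Hedged sketch: one lock file per CPU core, analogous to the
# /var/tmp/spdk_cpu_lock_* files checked by cpu_locks.sh.
# Path and names are demo placeholders only.

claim_core() {
    local core=$1 fd lockfile
    # Zero-padded name, mirroring the spdk_cpu_lock_000 style seen in the log.
    lockfile=$(printf '/tmp/demo_cpu_lock_%03d' "$core")
    # Open a fresh descriptor and try a non-blocking exclusive lock.
    exec {fd}>"$lockfile"
    if ! flock -n "$fd"; then
        echo "Cannot create lock on core $core, another process has claimed it" >&2
        return 1
    fi
    # Keep $fd open for the life of the process to hold the claim.
    return 0
}
```

Because `flock` locks live on the open file description, a second open of the same file — by another process or even the same one — is denied while the first claim is held, which is why the second target in the log reports the error and exits instead of starting.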
00:12:57.664 ERROR: process (pid: 59432) is no longer running 00:12:57.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59432) - No such process 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59408 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59408 ']' 00:12:57.664 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59408 00:12:57.923 14:45:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:12:57.923 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:57.923 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59408 00:12:57.923 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:57.923 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:57.923 killing process with pid 59408 00:12:57.923 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59408' 00:12:57.923 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59408 00:12:57.923 14:45:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59408 00:13:00.452 00:13:00.452 real 0m4.502s 00:13:00.452 user 0m12.131s 00:13:00.452 sys 0m0.791s 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:00.452 ************************************ 00:13:00.452 END TEST locking_overlapped_coremask 00:13:00.452 ************************************ 00:13:00.452 14:45:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:00.452 14:45:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:00.452 14:45:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:00.452 14:45:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:00.452 ************************************ 00:13:00.452 START TEST 
locking_overlapped_coremask_via_rpc 00:13:00.452 ************************************ 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59496 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59496 /var/tmp/spdk.sock 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59496 ']' 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:00.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:00.452 14:45:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.452 [2024-11-04 14:45:30.037060] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:13:00.452 [2024-11-04 14:45:30.037220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59496 ] 00:13:00.452 [2024-11-04 14:45:30.213869] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:00.452 [2024-11-04 14:45:30.213978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.711 [2024-11-04 14:45:30.358900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.711 [2024-11-04 14:45:30.359069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.711 [2024-11-04 14:45:30.359082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59514 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59514 /var/tmp/spdk2.sock 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59514 ']' 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:01.647 14:45:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:01.647 14:45:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.647 [2024-11-04 14:45:31.402313] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:01.647 [2024-11-04 14:45:31.402773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59514 ] 00:13:01.905 [2024-11-04 14:45:31.612500] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:01.905 [2024-11-04 14:45:31.612598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.164 [2024-11-04 14:45:32.000930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.164 [2024-11-04 14:45:32.001039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.164 [2024-11-04 14:45:32.001051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.694 14:45:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.694 [2024-11-04 14:45:34.235589] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59496 has claimed it. 00:13:04.694 request: 00:13:04.694 { 00:13:04.694 "method": "framework_enable_cpumask_locks", 00:13:04.694 "req_id": 1 00:13:04.694 } 00:13:04.694 Got JSON-RPC error response 00:13:04.694 response: 00:13:04.694 { 00:13:04.694 "code": -32603, 00:13:04.694 "message": "Failed to claim CPU core: 2" 00:13:04.694 } 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59496 /var/tmp/spdk.sock 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # 
'[' -z 59496 ']' 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59514 /var/tmp/spdk2.sock 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59514 ']' 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:04.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
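The `check_remaining_locks` helper invoked below (cpu_locks.sh@161, same as @36-@38 earlier) verifies the lock state by globbing the actual lock files into one array, brace-expanding the expected names into another, and comparing the two as whitespace-joined strings. A self-contained sketch of that comparison, using a throwaway `/tmp` prefix instead of the real `/var/tmp/spdk_cpu_lock_` files:

```shell
#!/usr/bin/env bash
# Sketch of the check_remaining_locks pattern, with a demo prefix so it
# never touches real /var/tmp/spdk_cpu_lock_* files.
prefix=/tmp/demo_spdk_cpu_lock_
touch "${prefix}"{000..002}             # simulate locks held on cores 0-2

locks=("${prefix}"*)                    # what actually exists (glob, sorted)
locks_expected=("${prefix}"{000..002})  # what a 3-core mask should leave

# Join each array with spaces and compare, as the test script does.
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match"
rm -f "${prefix}"{000..002}
```

The glob expands in lexicographic order and the brace expansion in numeric order, so for zero-padded names the two lists line up exactly when, and only when, the expected lock files are present.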
00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:04.694 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.953 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.953 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:04.953 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:04.953 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:04.953 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:04.953 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:04.953 00:13:04.953 real 0m4.915s 00:13:04.953 user 0m1.834s 00:13:04.953 sys 0m0.271s 00:13:04.953 ************************************ 00:13:04.953 END TEST locking_overlapped_coremask_via_rpc 00:13:04.953 ************************************ 00:13:04.953 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:04.953 14:45:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.211 14:45:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:05.211 14:45:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59496 ]] 00:13:05.211 14:45:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59496 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59496 ']' 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59496 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59496 00:13:05.211 killing process with pid 59496 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59496' 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59496 00:13:05.211 14:45:34 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59496 00:13:07.742 14:45:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59514 ]] 00:13:07.742 14:45:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59514 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59514 ']' 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59514 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59514 00:13:07.742 killing process with pid 59514 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59514' 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59514 00:13:07.742 14:45:37 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59514 00:13:10.270 14:45:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:10.270 Process with pid 59496 is not found 00:13:10.270 Process with pid 59514 is not found 00:13:10.270 14:45:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:10.270 14:45:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59496 ]] 00:13:10.270 14:45:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59496 00:13:10.270 14:45:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59496 ']' 00:13:10.270 14:45:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59496 00:13:10.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59496) - No such process 00:13:10.270 14:45:39 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59496 is not found' 00:13:10.270 14:45:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59514 ]] 00:13:10.270 14:45:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59514 00:13:10.270 14:45:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59514 ']' 00:13:10.270 14:45:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59514 00:13:10.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59514) - No such process 00:13:10.270 14:45:39 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59514 is not found' 00:13:10.270 14:45:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:10.270 00:13:10.270 real 0m52.416s 00:13:10.270 user 1m30.494s 00:13:10.270 sys 0m8.034s 00:13:10.270 14:45:39 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.270 ************************************ 00:13:10.270 END TEST cpu_locks 00:13:10.270 
************************************ 00:13:10.270 14:45:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:10.270 ************************************ 00:13:10.270 END TEST event 00:13:10.270 ************************************ 00:13:10.270 00:13:10.270 real 1m24.050s 00:13:10.270 user 2m35.361s 00:13:10.270 sys 0m12.415s 00:13:10.270 14:45:39 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.270 14:45:39 event -- common/autotest_common.sh@10 -- # set +x 00:13:10.270 14:45:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:10.270 14:45:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:10.270 14:45:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:10.270 14:45:39 -- common/autotest_common.sh@10 -- # set +x 00:13:10.270 ************************************ 00:13:10.270 START TEST thread 00:13:10.270 ************************************ 00:13:10.270 14:45:39 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:10.270 * Looking for test storage... 
00:13:10.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:10.270 14:45:39 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:10.270 14:45:39 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:13:10.270 14:45:39 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:10.270 14:45:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.270 14:45:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.270 14:45:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.270 14:45:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.270 14:45:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.270 14:45:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.270 14:45:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.270 14:45:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.270 14:45:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.270 14:45:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.270 14:45:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.270 14:45:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:13:10.270 14:45:40 thread -- scripts/common.sh@345 -- # : 1 00:13:10.270 14:45:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.270 14:45:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.270 14:45:40 thread -- scripts/common.sh@365 -- # decimal 1 00:13:10.270 14:45:40 thread -- scripts/common.sh@353 -- # local d=1 00:13:10.270 14:45:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.270 14:45:40 thread -- scripts/common.sh@355 -- # echo 1 00:13:10.270 14:45:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.270 14:45:40 thread -- scripts/common.sh@366 -- # decimal 2 00:13:10.270 14:45:40 thread -- scripts/common.sh@353 -- # local d=2 00:13:10.270 14:45:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.270 14:45:40 thread -- scripts/common.sh@355 -- # echo 2 00:13:10.270 14:45:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.270 14:45:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.270 14:45:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.270 14:45:40 thread -- scripts/common.sh@368 -- # return 0 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.270 --rc genhtml_branch_coverage=1 00:13:10.270 --rc genhtml_function_coverage=1 00:13:10.270 --rc genhtml_legend=1 00:13:10.270 --rc geninfo_all_blocks=1 00:13:10.270 --rc geninfo_unexecuted_blocks=1 00:13:10.270 00:13:10.270 ' 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.270 --rc genhtml_branch_coverage=1 00:13:10.270 --rc genhtml_function_coverage=1 00:13:10.270 --rc genhtml_legend=1 00:13:10.270 --rc geninfo_all_blocks=1 00:13:10.270 --rc geninfo_unexecuted_blocks=1 00:13:10.270 00:13:10.270 ' 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:10.270 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.270 --rc genhtml_branch_coverage=1 00:13:10.270 --rc genhtml_function_coverage=1 00:13:10.270 --rc genhtml_legend=1 00:13:10.270 --rc geninfo_all_blocks=1 00:13:10.270 --rc geninfo_unexecuted_blocks=1 00:13:10.270 00:13:10.270 ' 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.270 --rc genhtml_branch_coverage=1 00:13:10.270 --rc genhtml_function_coverage=1 00:13:10.270 --rc genhtml_legend=1 00:13:10.270 --rc geninfo_all_blocks=1 00:13:10.270 --rc geninfo_unexecuted_blocks=1 00:13:10.270 00:13:10.270 ' 00:13:10.270 14:45:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:10.270 14:45:40 thread -- common/autotest_common.sh@10 -- # set +x 00:13:10.270 ************************************ 00:13:10.270 START TEST thread_poller_perf 00:13:10.270 ************************************ 00:13:10.270 14:45:40 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:10.270 [2024-11-04 14:45:40.103417] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:13:10.270 [2024-11-04 14:45:40.103749] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59715 ] 00:13:10.529 [2024-11-04 14:45:40.301055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.787 [2024-11-04 14:45:40.459453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.787 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:12.160 [2024-11-04T14:45:42.052Z] ====================================== 00:13:12.160 [2024-11-04T14:45:42.052Z] busy:2214557593 (cyc) 00:13:12.160 [2024-11-04T14:45:42.052Z] total_run_count: 295000 00:13:12.160 [2024-11-04T14:45:42.052Z] tsc_hz: 2200000000 (cyc) 00:13:12.160 [2024-11-04T14:45:42.052Z] ====================================== 00:13:12.160 [2024-11-04T14:45:42.052Z] poller_cost: 7506 (cyc), 3411 (nsec) 00:13:12.160 00:13:12.160 real 0m1.670s 00:13:12.160 user 0m1.434s 00:13:12.160 sys 0m0.124s 00:13:12.160 14:45:41 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:12.160 14:45:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:12.160 ************************************ 00:13:12.160 END TEST thread_poller_perf 00:13:12.160 ************************************ 00:13:12.160 14:45:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:12.160 14:45:41 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:13:12.160 14:45:41 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:12.160 14:45:41 thread -- common/autotest_common.sh@10 -- # set +x 00:13:12.160 ************************************ 00:13:12.160 START TEST thread_poller_perf 00:13:12.160 
************************************ 00:13:12.160 14:45:41 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:12.160 [2024-11-04 14:45:41.832390] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:12.160 [2024-11-04 14:45:41.832803] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:13:12.160 [2024-11-04 14:45:42.020104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.419 [2024-11-04 14:45:42.175875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.419 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:13:13.793 [2024-11-04T14:45:43.685Z] ====================================== 00:13:13.793 [2024-11-04T14:45:43.685Z] busy:2205749154 (cyc) 00:13:13.793 [2024-11-04T14:45:43.685Z] total_run_count: 3239000 00:13:13.793 [2024-11-04T14:45:43.685Z] tsc_hz: 2200000000 (cyc) 00:13:13.793 [2024-11-04T14:45:43.685Z] ====================================== 00:13:13.793 [2024-11-04T14:45:43.685Z] poller_cost: 680 (cyc), 309 (nsec) 00:13:13.793 00:13:13.793 real 0m1.614s 00:13:13.793 user 0m1.408s 00:13:13.793 sys 0m0.094s 00:13:13.793 14:45:43 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:13.793 14:45:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:13.793 ************************************ 00:13:13.793 END TEST thread_poller_perf 00:13:13.793 ************************************ 00:13:13.793 14:45:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:13.793 ************************************ 00:13:13.793 END TEST thread 00:13:13.793 ************************************ 00:13:13.793 
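The `poller_cost` figures in both poller_perf runs follow directly from the summary counters: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure converts that through tsc_hz, truncating at each step (which is an inference from the reported values, not a quote of the tool's source). A quick sketch reproducing the second run's numbers with shell integer arithmetic:

```shell
#!/usr/bin/env bash
# Recompute poller_cost from the second run's counters above,
# truncating at each division as the reported values suggest.
busy=2205749154       # busy cycles
runs=3239000          # total_run_count
tsc_hz=2200000000     # TSC frequency in Hz (cycles per second)

cyc=$(( busy / runs ))                    # cycles per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))     # nanoseconds per poll
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
```

The same two truncating divisions also reproduce the first run's 7506 (cyc) and 3411 (nsec) from its counters, so the conversion appears consistent across both tests.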
00:13:13.793 real 0m3.595s 00:13:13.794 user 0m2.995s 00:13:13.794 sys 0m0.369s 00:13:13.794 14:45:43 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:13.794 14:45:43 thread -- common/autotest_common.sh@10 -- # set +x 00:13:13.794 14:45:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:13:13.794 14:45:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:13.794 14:45:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:13.794 14:45:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:13.794 14:45:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.794 ************************************ 00:13:13.794 START TEST app_cmdline 00:13:13.794 ************************************ 00:13:13.794 14:45:43 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:13.794 * Looking for test storage... 00:13:13.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:13.794 14:45:43 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:13.794 14:45:43 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:13:13.794 14:45:43 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:13.794 14:45:43 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.794 14:45:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.054 14:45:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:14.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.054 --rc genhtml_branch_coverage=1 00:13:14.054 --rc genhtml_function_coverage=1 00:13:14.054 --rc 
genhtml_legend=1 00:13:14.054 --rc geninfo_all_blocks=1 00:13:14.054 --rc geninfo_unexecuted_blocks=1 00:13:14.054 00:13:14.054 ' 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:14.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.054 --rc genhtml_branch_coverage=1 00:13:14.054 --rc genhtml_function_coverage=1 00:13:14.054 --rc genhtml_legend=1 00:13:14.054 --rc geninfo_all_blocks=1 00:13:14.054 --rc geninfo_unexecuted_blocks=1 00:13:14.054 00:13:14.054 ' 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:14.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.054 --rc genhtml_branch_coverage=1 00:13:14.054 --rc genhtml_function_coverage=1 00:13:14.054 --rc genhtml_legend=1 00:13:14.054 --rc geninfo_all_blocks=1 00:13:14.054 --rc geninfo_unexecuted_blocks=1 00:13:14.054 00:13:14.054 ' 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:14.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.054 --rc genhtml_branch_coverage=1 00:13:14.054 --rc genhtml_function_coverage=1 00:13:14.054 --rc genhtml_legend=1 00:13:14.054 --rc geninfo_all_blocks=1 00:13:14.054 --rc geninfo_unexecuted_blocks=1 00:13:14.054 00:13:14.054 ' 00:13:14.054 14:45:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:14.054 14:45:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59840 00:13:14.054 14:45:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:14.054 14:45:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59840 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59840 ']' 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:14.054 14:45:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:14.054 [2024-11-04 14:45:43.877079] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:14.054 [2024-11-04 14:45:43.877619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:13:14.412 [2024-11-04 14:45:44.070486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.412 [2024-11-04 14:45:44.225336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.351 14:45:45 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:15.351 14:45:45 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:13:15.351 14:45:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:15.615 { 00:13:15.616 "version": "SPDK v25.01-pre git sha1 361e7dfef", 00:13:15.616 "fields": { 00:13:15.616 "major": 25, 00:13:15.616 "minor": 1, 00:13:15.616 "patch": 0, 00:13:15.616 "suffix": "-pre", 00:13:15.616 "commit": "361e7dfef" 00:13:15.616 } 00:13:15.616 } 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:15.616 14:45:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:15.616 14:45:45 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:16.185 request: 00:13:16.185 { 00:13:16.185 "method": "env_dpdk_get_mem_stats", 00:13:16.185 "req_id": 1 00:13:16.185 } 00:13:16.185 Got JSON-RPC error response 00:13:16.185 response: 00:13:16.185 { 00:13:16.185 "code": -32601, 00:13:16.185 "message": "Method not found" 00:13:16.185 } 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.185 14:45:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59840 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59840 ']' 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59840 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59840 00:13:16.185 killing process with pid 59840 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59840' 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@971 -- # kill 59840 00:13:16.185 14:45:45 app_cmdline -- common/autotest_common.sh@976 -- # wait 59840 00:13:18.730 00:13:18.730 real 0m4.832s 00:13:18.730 user 0m5.339s 00:13:18.730 sys 0m0.742s 00:13:18.730 14:45:48 app_cmdline -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:18.730 ************************************ 00:13:18.730 END TEST app_cmdline 00:13:18.730 14:45:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:18.730 ************************************ 00:13:18.730 14:45:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:18.730 14:45:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:18.730 14:45:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:18.730 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:13:18.730 ************************************ 00:13:18.730 START TEST version 00:13:18.730 ************************************ 00:13:18.730 14:45:48 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:18.730 * Looking for test storage... 00:13:18.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:18.730 14:45:48 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:18.730 14:45:48 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:18.730 14:45:48 version -- common/autotest_common.sh@1691 -- # lcov --version 00:13:18.730 14:45:48 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:18.730 14:45:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.731 14:45:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.731 14:45:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.731 14:45:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.731 14:45:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.731 14:45:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.731 14:45:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.731 14:45:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.731 14:45:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.731 14:45:48 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:13:18.731 14:45:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.731 14:45:48 version -- scripts/common.sh@344 -- # case "$op" in 00:13:18.731 14:45:48 version -- scripts/common.sh@345 -- # : 1 00:13:18.731 14:45:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.731 14:45:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:18.731 14:45:48 version -- scripts/common.sh@365 -- # decimal 1 00:13:18.731 14:45:48 version -- scripts/common.sh@353 -- # local d=1 00:13:18.731 14:45:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.731 14:45:48 version -- scripts/common.sh@355 -- # echo 1 00:13:18.731 14:45:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.731 14:45:48 version -- scripts/common.sh@366 -- # decimal 2 00:13:18.731 14:45:48 version -- scripts/common.sh@353 -- # local d=2 00:13:18.731 14:45:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.731 14:45:48 version -- scripts/common.sh@355 -- # echo 2 00:13:18.731 14:45:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.731 14:45:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.731 14:45:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.731 14:45:48 version -- scripts/common.sh@368 -- # return 0 00:13:18.731 14:45:48 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.731 14:45:48 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:18.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.731 --rc genhtml_branch_coverage=1 00:13:18.731 --rc genhtml_function_coverage=1 00:13:18.731 --rc genhtml_legend=1 00:13:18.731 --rc geninfo_all_blocks=1 00:13:18.731 --rc geninfo_unexecuted_blocks=1 00:13:18.731 00:13:18.731 ' 00:13:18.731 14:45:48 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:13:18.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.731 --rc genhtml_branch_coverage=1 00:13:18.731 --rc genhtml_function_coverage=1 00:13:18.731 --rc genhtml_legend=1 00:13:18.731 --rc geninfo_all_blocks=1 00:13:18.731 --rc geninfo_unexecuted_blocks=1 00:13:18.731 00:13:18.731 ' 00:13:18.731 14:45:48 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:18.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.731 --rc genhtml_branch_coverage=1 00:13:18.731 --rc genhtml_function_coverage=1 00:13:18.731 --rc genhtml_legend=1 00:13:18.731 --rc geninfo_all_blocks=1 00:13:18.731 --rc geninfo_unexecuted_blocks=1 00:13:18.731 00:13:18.731 ' 00:13:18.731 14:45:48 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:18.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.731 --rc genhtml_branch_coverage=1 00:13:18.731 --rc genhtml_function_coverage=1 00:13:18.731 --rc genhtml_legend=1 00:13:18.731 --rc geninfo_all_blocks=1 00:13:18.731 --rc geninfo_unexecuted_blocks=1 00:13:18.731 00:13:18.731 ' 00:13:18.731 14:45:48 version -- app/version.sh@17 -- # get_header_version major 00:13:18.731 14:45:48 version -- app/version.sh@14 -- # cut -f2 00:13:18.731 14:45:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:18.731 14:45:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:18.731 14:45:48 version -- app/version.sh@17 -- # major=25 00:13:18.731 14:45:48 version -- app/version.sh@18 -- # get_header_version minor 00:13:18.731 14:45:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:18.731 14:45:48 version -- app/version.sh@14 -- # cut -f2 00:13:18.731 14:45:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:18.731 14:45:48 version -- app/version.sh@18 -- # minor=1 00:13:18.731 14:45:48 
version -- app/version.sh@19 -- # get_header_version patch 00:13:18.731 14:45:48 version -- app/version.sh@14 -- # cut -f2 00:13:18.731 14:45:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:18.731 14:45:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:18.731 14:45:48 version -- app/version.sh@19 -- # patch=0 00:13:18.731 14:45:48 version -- app/version.sh@20 -- # get_header_version suffix 00:13:18.731 14:45:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:18.731 14:45:48 version -- app/version.sh@14 -- # cut -f2 00:13:18.731 14:45:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:18.731 14:45:48 version -- app/version.sh@20 -- # suffix=-pre 00:13:18.731 14:45:48 version -- app/version.sh@22 -- # version=25.1 00:13:18.731 14:45:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:18.731 14:45:48 version -- app/version.sh@28 -- # version=25.1rc0 00:13:18.731 14:45:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:18.731 14:45:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:18.991 14:45:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:13:18.991 14:45:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:13:18.991 00:13:18.991 real 0m0.250s 00:13:18.991 user 0m0.158s 00:13:18.991 sys 0m0.119s 00:13:18.991 14:45:48 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:18.991 ************************************ 00:13:18.991 END TEST version 00:13:18.991 ************************************ 00:13:18.991 14:45:48 version -- common/autotest_common.sh@10 -- # set +x 00:13:18.991 
14:45:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:13:18.991 14:45:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:13:18.991 14:45:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:18.991 14:45:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:18.991 14:45:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:18.991 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:13:18.991 ************************************ 00:13:18.991 START TEST bdev_raid 00:13:18.991 ************************************ 00:13:18.991 14:45:48 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:18.991 * Looking for test storage... 00:13:18.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:18.991 14:45:48 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:18.991 14:45:48 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:18.991 14:45:48 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:18.991 14:45:48 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:18.991 14:45:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.991 14:45:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.992 14:45:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:13:18.992 14:45:48 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.992 14:45:48 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:18.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.992 --rc genhtml_branch_coverage=1 00:13:18.992 --rc genhtml_function_coverage=1 00:13:18.992 --rc genhtml_legend=1 00:13:18.992 --rc geninfo_all_blocks=1 00:13:18.992 --rc geninfo_unexecuted_blocks=1 00:13:18.992 00:13:18.992 ' 00:13:18.992 14:45:48 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:18.992 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:18.992 --rc genhtml_branch_coverage=1 00:13:18.992 --rc genhtml_function_coverage=1 00:13:18.992 --rc genhtml_legend=1 00:13:18.992 --rc geninfo_all_blocks=1 00:13:18.992 --rc geninfo_unexecuted_blocks=1 00:13:18.992 00:13:18.992 ' 00:13:18.992 14:45:48 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:18.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.992 --rc genhtml_branch_coverage=1 00:13:18.992 --rc genhtml_function_coverage=1 00:13:18.992 --rc genhtml_legend=1 00:13:18.992 --rc geninfo_all_blocks=1 00:13:18.992 --rc geninfo_unexecuted_blocks=1 00:13:18.992 00:13:18.992 ' 00:13:18.992 14:45:48 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:18.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.992 --rc genhtml_branch_coverage=1 00:13:18.992 --rc genhtml_function_coverage=1 00:13:18.992 --rc genhtml_legend=1 00:13:18.992 --rc geninfo_all_blocks=1 00:13:18.992 --rc geninfo_unexecuted_blocks=1 00:13:18.992 00:13:18.992 ' 00:13:18.992 14:45:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:18.992 14:45:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:18.992 14:45:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:13:19.251 14:45:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:13:19.251 14:45:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:13:19.251 14:45:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:13:19.251 14:45:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:13:19.251 14:45:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:19.251 14:45:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.251 14:45:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.251 ************************************ 
00:13:19.251 START TEST raid1_resize_data_offset_test 00:13:19.251 ************************************ 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60035 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60035' 00:13:19.251 Process raid pid: 60035 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60035 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 60035 ']' 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.251 14:45:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.251 [2024-11-04 14:45:48.992744] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:13:19.251 [2024-11-04 14:45:48.992906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.510 [2024-11-04 14:45:49.175945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.510 [2024-11-04 14:45:49.344696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.769 [2024-11-04 14:45:49.592142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.769 [2024-11-04 14:45:49.592211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.334 malloc0 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.334 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 malloc1 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.626 14:45:50 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 null0 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 [2024-11-04 14:45:50.253095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:13:20.626 [2024-11-04 14:45:50.256058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:20.626 [2024-11-04 14:45:50.256126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:13:20.626 [2024-11-04 14:45:50.256389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:20.626 [2024-11-04 14:45:50.256419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:13:20.626 [2024-11-04 14:45:50.256766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:20.626 [2024-11-04 14:45:50.257054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:20.626 [2024-11-04 14:45:50.257085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:20.626 [2024-11-04 14:45:50.257375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.626 [2024-11-04 14:45:50.325376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.626 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.194 malloc2 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.194 [2024-11-04 14:45:50.952426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:21.194 [2024-11-04 14:45:50.971311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.194 [2024-11-04 14:45:50.974139] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.194 14:45:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60035 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 60035 ']' 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 60035 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60035 00:13:21.194 killing process with pid 60035 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60035' 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 60035 00:13:21.194 14:45:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 60035 00:13:21.194 [2024-11-04 14:45:51.064773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.194 [2024-11-04 14:45:51.066291] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:13:21.194 [2024-11-04 14:45:51.066403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.194 [2024-11-04 14:45:51.066432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:13:21.454 [2024-11-04 14:45:51.102595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.454 [2024-11-04 14:45:51.103106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.454 [2024-11-04 14:45:51.103139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:23.361 [2024-11-04 14:45:52.895902] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.308 ************************************ 00:13:24.308 END TEST raid1_resize_data_offset_test 00:13:24.308 ************************************ 00:13:24.308 14:45:54 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:13:24.308 00:13:24.308 real 0m5.105s 00:13:24.308 user 0m4.955s 00:13:24.308 sys 0m0.810s 00:13:24.308 14:45:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:24.309 14:45:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.309 14:45:54 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:13:24.309 14:45:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:24.309 14:45:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:24.309 14:45:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.309 ************************************ 00:13:24.309 START TEST raid0_resize_superblock_test 00:13:24.309 ************************************ 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60124 00:13:24.309 Process raid pid: 60124 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60124' 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60124 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60124 ']' 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.309 14:45:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.309 [2024-11-04 14:45:54.170427] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:24.309 [2024-11-04 14:45:54.170622] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.567 [2024-11-04 14:45:54.361942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.826 [2024-11-04 14:45:54.541024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.084 [2024-11-04 14:45:54.791987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.084 [2024-11-04 14:45:54.792049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.347 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:25.347 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:25.347 14:45:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:13:25.347 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.347 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:25.925 malloc0 00:13:25.925 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.925 14:45:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:13:25.925 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.925 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.925 [2024-11-04 14:45:55.808157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:25.925 [2024-11-04 14:45:55.808274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.925 [2024-11-04 14:45:55.808333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:25.925 [2024-11-04 14:45:55.808366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.925 [2024-11-04 14:45:55.811721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.925 [2024-11-04 14:45:55.811764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:25.925 pt0 00:13:25.925 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.925 14:45:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:13:25.925 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.925 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.183 4330bad9-b401-4a88-925a-cd26831e8431 00:13:26.183 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.183 14:45:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:13:26.183 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.183 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.183 5e0a4771-4274-47ba-a073-8431572ab3bf 00:13:26.183 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.183 14:45:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:13:26.183 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.183 14:45:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.183 745264db-b7dd-4eec-a8ac-02b282873ec6 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.183 [2024-11-04 14:45:56.006219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5e0a4771-4274-47ba-a073-8431572ab3bf is claimed 00:13:26.183 [2024-11-04 14:45:56.006420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 745264db-b7dd-4eec-a8ac-02b282873ec6 is claimed 00:13:26.183 [2024-11-04 14:45:56.006624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:26.183 [2024-11-04 14:45:56.006652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:13:26.183 [2024-11-04 14:45:56.007088] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:26.183 [2024-11-04 14:45:56.007417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:26.183 [2024-11-04 14:45:56.007442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:26.183 [2024-11-04 14:45:56.007631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:13:26.183 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.184 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:13:26.442 14:45:56 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.442 [2024-11-04 14:45:56.130611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.442 [2024-11-04 14:45:56.178661] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:26.442 [2024-11-04 14:45:56.178709] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5e0a4771-4274-47ba-a073-8431572ab3bf' was resized: old size 131072, new size 204800 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.442 [2024-11-04 14:45:56.186432] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:26.442 [2024-11-04 14:45:56.186478] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '745264db-b7dd-4eec-a8ac-02b282873ec6' was resized: old size 131072, new size 204800 00:13:26.442 [2024-11-04 14:45:56.186524] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:13:26.442 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.442 14:45:56 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.443 [2024-11-04 14:45:56.294653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.443 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.702 [2024-11-04 14:45:56.346460] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:13:26.702 [2024-11-04 14:45:56.346581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:26.702 [2024-11-04 14:45:56.346603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.702 [2024-11-04 14:45:56.346631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:26.702 [2024-11-04 14:45:56.346804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.702 [2024-11-04 14:45:56.346864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.702 [2024-11-04 14:45:56.346886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.702 [2024-11-04 14:45:56.354298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:26.702 [2024-11-04 14:45:56.354361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.702 [2024-11-04 14:45:56.354394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:13:26.702 [2024-11-04 14:45:56.354420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.702 [2024-11-04 14:45:56.357563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.702 [2024-11-04 14:45:56.357607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:13:26.702 pt0 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.702 [2024-11-04 14:45:56.359948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5e0a4771-4274-47ba-a073-8431572ab3bf 00:13:26.702 [2024-11-04 14:45:56.360046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5e0a4771-4274-47ba-a073-8431572ab3bf is claimed 00:13:26.702 [2024-11-04 14:45:56.360202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 745264db-b7dd-4eec-a8ac-02b282873ec6 00:13:26.702 [2024-11-04 14:45:56.360260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 745264db-b7dd-4eec-a8ac-02b282873ec6 is claimed 00:13:26.702 [2024-11-04 14:45:56.360423] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 745264db-b7dd-4eec-a8ac-02b282873ec6 (2) smaller than existing raid bdev Raid (3) 00:13:26.702 [2024-11-04 14:45:56.360470] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5e0a4771-4274-47ba-a073-8431572ab3bf: File exists 00:13:26.702 [2024-11-04 14:45:56.360522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:26.702 [2024-11-04 14:45:56.360541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:13:26.702 [2024-11-04 14:45:56.360865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:26.702 [2024-11-04 14:45:56.361067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:26.702 [2024-11-04 
14:45:56.361090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:13:26.702 [2024-11-04 14:45:56.361305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.702 [2024-11-04 14:45:56.374650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60124 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60124 ']' 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60124 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60124 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:26.702 killing process with pid 60124 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60124' 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60124 00:13:26.702 [2024-11-04 14:45:56.446485] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.702 14:45:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60124 00:13:26.702 [2024-11-04 14:45:56.446598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.702 [2024-11-04 14:45:56.446663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.702 [2024-11-04 14:45:56.446678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:13:28.078 [2024-11-04 14:45:57.901584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.466 14:45:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:13:29.466 00:13:29.466 real 0m5.052s 00:13:29.466 user 0m5.233s 00:13:29.466 sys 0m0.817s 00:13:29.466 14:45:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:29.466 14:45:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.466 
************************************ 00:13:29.466 END TEST raid0_resize_superblock_test 00:13:29.466 ************************************ 00:13:29.466 14:45:59 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:13:29.466 14:45:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:29.466 14:45:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:29.466 14:45:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.466 ************************************ 00:13:29.466 START TEST raid1_resize_superblock_test 00:13:29.466 ************************************ 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60227 00:13:29.466 Process raid pid: 60227 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60227' 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60227 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60227 ']' 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:29.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:29.466 14:45:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.466 [2024-11-04 14:45:59.270851] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:29.466 [2024-11-04 14:45:59.271106] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.724 [2024-11-04 14:45:59.452290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.725 [2024-11-04 14:45:59.587137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.982 [2024-11-04 14:45:59.814192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.982 [2024-11-04 14:45:59.814260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.549 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:30.549 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:30.549 14:46:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:13:30.549 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.549 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.115 malloc0 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.115 14:46:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.115 [2024-11-04 14:46:00.855192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:31.115 [2024-11-04 14:46:00.855296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.115 [2024-11-04 14:46:00.855330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:31.115 [2024-11-04 14:46:00.855350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.115 [2024-11-04 14:46:00.858239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.115 [2024-11-04 14:46:00.858278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:31.115 pt0 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.115 b1f82a6f-17cc-4374-a277-505ef4d78a3d 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.115 14:46:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.115 b8df4477-ab85-483a-9509-1abbc1cb654f 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.115 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.115 14432f13-5b96-4975-86c8-a4a34a925d09 00:13:31.116 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.116 14:46:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:13:31.116 14:46:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:13:31.116 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.116 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.116 [2024-11-04 14:46:00.996730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b8df4477-ab85-483a-9509-1abbc1cb654f is claimed 00:13:31.116 [2024-11-04 14:46:00.996847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 14432f13-5b96-4975-86c8-a4a34a925d09 is claimed 00:13:31.116 [2024-11-04 14:46:00.997036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:31.116 [2024-11-04 14:46:00.997062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:13:31.116 [2024-11-04 14:46:00.997449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:31.116 [2024-11-04 14:46:00.997713] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:31.116 [2024-11-04 14:46:00.997740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:31.116 [2024-11-04 14:46:00.997952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.116 14:46:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.116 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:13:31.116 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:13:31.116 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.116 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.374 [2024-11-04 14:46:01.117078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.374 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.374 [2024-11-04 14:46:01.169060] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:31.375 [2024-11-04 14:46:01.169101] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b8df4477-ab85-483a-9509-1abbc1cb654f' was resized: old size 131072, new size 204800 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:13:31.375 14:46:01 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.375 [2024-11-04 14:46:01.176884] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:31.375 [2024-11-04 14:46:01.176927] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '14432f13-5b96-4975-86c8-a4a34a925d09' was resized: old size 131072, new size 204800 00:13:31.375 [2024-11-04 14:46:01.176966] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:31.375 14:46:01 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:13:31.632 [2024-11-04 14:46:01.289094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.632 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.633 [2024-11-04 14:46:01.336847] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:31.633 [2024-11-04 14:46:01.336944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:13:31.633 [2024-11-04 14:46:01.336983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:31.633 [2024-11-04 14:46:01.337184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.633 [2024-11-04 14:46:01.337481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.633 [2024-11-04 14:46:01.337589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.633 [2024-11-04 14:46:01.337613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.633 [2024-11-04 14:46:01.344736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:31.633 [2024-11-04 14:46:01.344800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.633 [2024-11-04 14:46:01.344830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:13:31.633 [2024-11-04 14:46:01.344850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.633 [2024-11-04 14:46:01.347800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.633 [2024-11-04 14:46:01.347848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:31.633 pt0 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.633 
14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.633 [2024-11-04 14:46:01.350127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b8df4477-ab85-483a-9509-1abbc1cb654f 00:13:31.633 [2024-11-04 14:46:01.350218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b8df4477-ab85-483a-9509-1abbc1cb654f is claimed 00:13:31.633 [2024-11-04 14:46:01.350380] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 14432f13-5b96-4975-86c8-a4a34a925d09 00:13:31.633 [2024-11-04 14:46:01.350427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 14432f13-5b96-4975-86c8-a4a34a925d09 is claimed 00:13:31.633 [2024-11-04 14:46:01.350590] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 14432f13-5b96-4975-86c8-a4a34a925d09 (2) smaller than existing raid bdev Raid (3) 00:13:31.633 [2024-11-04 14:46:01.350648] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b8df4477-ab85-483a-9509-1abbc1cb654f: File exists 00:13:31.633 [2024-11-04 14:46:01.350698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:31.633 [2024-11-04 14:46:01.350716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:31.633 [2024-11-04 14:46:01.351022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:31.633 [2024-11-04 14:46:01.351253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:31.633 [2024-11-04 14:46:01.351276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:13:31.633 
[2024-11-04 14:46:01.351477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.633 [2024-11-04 14:46:01.365083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60227 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60227 ']' 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60227 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60227 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:31.633 killing process with pid 60227 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60227' 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60227 00:13:31.633 [2024-11-04 14:46:01.435761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.633 14:46:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60227 00:13:31.633 [2024-11-04 14:46:01.435864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.633 [2024-11-04 14:46:01.435935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.633 [2024-11-04 14:46:01.435949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:13:33.009 [2024-11-04 14:46:02.767827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.948 14:46:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:13:33.948 00:13:33.948 real 0m4.652s 00:13:33.948 user 0m4.961s 00:13:33.948 sys 0m0.663s 00:13:33.948 14:46:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:33.948 14:46:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.948 ************************************ 00:13:33.948 END TEST raid1_resize_superblock_test 00:13:33.948 ************************************ 00:13:34.206 
14:46:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:13:34.206 14:46:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:13:34.206 14:46:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:13:34.206 14:46:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:13:34.206 14:46:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:13:34.206 14:46:03 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:34.206 14:46:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:34.206 14:46:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:34.206 14:46:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 ************************************ 00:13:34.206 START TEST raid_function_test_raid0 00:13:34.206 ************************************ 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60331 00:13:34.206 Process raid pid: 60331 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60331' 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60331 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60331 ']' 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:34.206 14:46:03 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:34.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:34.206 14:46:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 [2024-11-04 14:46:03.988676] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:34.207 [2024-11-04 14:46:03.988854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.465 [2024-11-04 14:46:04.178485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.725 [2024-11-04 14:46:04.359713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.725 [2024-11-04 14:46:04.611941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.725 [2024-11-04 14:46:04.612008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.293 14:46:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:35.293 14:46:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:13:35.293 14:46:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:13:35.293 14:46:04 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.293 14:46:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:35.293 Base_1 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:35.293 Base_2 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:35.293 [2024-11-04 14:46:05.057935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:35.293 [2024-11-04 14:46:05.060950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:35.293 [2024-11-04 14:46:05.061133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:35.293 [2024-11-04 14:46:05.061155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:35.293 [2024-11-04 14:46:05.061631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:35.293 [2024-11-04 14:46:05.061879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:35.293 [2024-11-04 14:46:05.061903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:13:35.293 [2024-11-04 14:46:05.062279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.293 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.293 
14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:13:35.552 [2024-11-04 14:46:05.358468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:35.552 /dev/nbd0 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.552 1+0 records in 00:13:35.552 1+0 records out 00:13:35.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401758 s, 10.2 MB/s 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:13:35.552 
14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.552 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:36.118 { 00:13:36.118 "nbd_device": "/dev/nbd0", 00:13:36.118 "bdev_name": "raid" 00:13:36.118 } 00:13:36.118 ]' 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:36.118 { 00:13:36.118 "nbd_device": "/dev/nbd0", 00:13:36.118 "bdev_name": "raid" 00:13:36.118 } 00:13:36.118 ]' 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:13:36.118 14:46:05 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:36.118 4096+0 records in 00:13:36.118 4096+0 records out 00:13:36.118 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0252495 s, 83.1 MB/s 00:13:36.118 14:46:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:36.376 4096+0 records in 00:13:36.376 4096+0 records out 00:13:36.376 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.375987 s, 5.6 MB/s 00:13:36.376 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:13:36.376 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:36.376 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:36.377 128+0 records in 00:13:36.377 128+0 records out 00:13:36.377 65536 bytes (66 kB, 64 KiB) copied, 0.00111061 s, 59.0 MB/s 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:36.377 
14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:36.377 2035+0 records in 00:13:36.377 2035+0 records out 00:13:36.377 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00911089 s, 114 MB/s 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:36.377 456+0 records in 00:13:36.377 456+0 records out 00:13:36.377 233472 bytes (233 kB, 228 KiB) copied, 0.00362349 s, 64.4 MB/s 00:13:36.377 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.635 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:36.893 [2024-11-04 14:46:06.557037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.893 14:46:06 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.893 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60331 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60331 ']' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60331 
00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60331 00:13:37.151 killing process with pid 60331 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60331' 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60331 00:13:37.151 [2024-11-04 14:46:06.934233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.151 14:46:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60331 00:13:37.151 [2024-11-04 14:46:06.934427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.151 [2024-11-04 14:46:06.934500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.151 [2024-11-04 14:46:06.934547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:13:37.409 [2024-11-04 14:46:07.142651] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.789 14:46:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:13:38.789 00:13:38.789 real 0m4.446s 00:13:38.789 user 0m5.285s 00:13:38.789 sys 0m1.073s 00:13:38.789 14:46:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:38.789 14:46:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:13:38.789 ************************************ 00:13:38.789 END TEST raid_function_test_raid0 00:13:38.789 ************************************ 00:13:38.789 14:46:08 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:13:38.789 14:46:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:38.789 14:46:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:38.789 14:46:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.789 ************************************ 00:13:38.789 START TEST raid_function_test_concat 00:13:38.789 ************************************ 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60460 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:38.789 Process raid pid: 60460 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60460' 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60460 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60460 ']' 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 
00:13:38.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.789 14:46:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:38.789 [2024-11-04 14:46:08.488847] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:38.789 [2024-11-04 14:46:08.489010] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.789 [2024-11-04 14:46:08.668797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.048 [2024-11-04 14:46:08.849352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.306 [2024-11-04 14:46:09.105146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.306 [2024-11-04 14:46:09.105253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:39.873 Base_1 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:39.873 Base_2 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:39.873 [2024-11-04 14:46:09.634904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:39.873 [2024-11-04 14:46:09.637840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:39.873 [2024-11-04 14:46:09.637943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:39.873 [2024-11-04 14:46:09.637966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:39.873 [2024-11-04 14:46:09.638304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:39.873 [2024-11-04 14:46:09.638540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:39.873 [2024-11-04 14:46:09.638567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:13:39.873 [2024-11-04 14:46:09.638898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:39.873 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:13:40.133 
[2024-11-04 14:46:09.959137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:40.133 /dev/nbd0 00:13:40.133 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:40.133 14:46:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:40.133 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:40.133 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:13:40.133 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:40.133 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:40.133 14:46:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.133 1+0 records in 00:13:40.133 1+0 records out 00:13:40.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410893 s, 10.0 MB/s 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.133 14:46:10 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.133 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:40.699 { 00:13:40.699 "nbd_device": "/dev/nbd0", 00:13:40.699 "bdev_name": "raid" 00:13:40.699 } 00:13:40.699 ]' 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:40.699 { 00:13:40.699 "nbd_device": "/dev/nbd0", 00:13:40.699 "bdev_name": "raid" 00:13:40.699 } 00:13:40.699 ]' 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:13:40.699 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:13:40.699 14:46:10 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:40.700 4096+0 records in 
00:13:40.700 4096+0 records out 00:13:40.700 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0274221 s, 76.5 MB/s 00:13:40.700 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:40.959 4096+0 records in 00:13:40.959 4096+0 records out 00:13:40.959 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.350014 s, 6.0 MB/s 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:40.959 128+0 records in 00:13:40.959 128+0 records out 00:13:40.959 65536 bytes (66 kB, 64 KiB) copied, 0.000698799 s, 93.8 MB/s 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:40.959 2035+0 records in 00:13:40.959 2035+0 records out 00:13:40.959 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0102558 s, 102 MB/s 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:40.959 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:41.218 456+0 records in 00:13:41.218 456+0 records out 00:13:41.218 233472 bytes (233 kB, 228 KiB) copied, 0.00287096 s, 81.3 MB/s 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.218 14:46:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.476 [2024-11-04 14:46:11.146390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@45 -- # return 0 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.476 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60460 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60460 ']' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60460 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 
00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60460 00:13:41.734 killing process with pid 60460 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60460' 00:13:41.734 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60460 00:13:41.735 14:46:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60460 00:13:41.735 [2024-11-04 14:46:11.503671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.735 [2024-11-04 14:46:11.503833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.735 [2024-11-04 14:46:11.503913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.735 [2024-11-04 14:46:11.503945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:13:41.993 [2024-11-04 14:46:11.702986] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.368 14:46:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:13:43.368 00:13:43.368 real 0m4.486s 00:13:43.368 user 0m5.443s 00:13:43.368 sys 0m1.054s 00:13:43.368 14:46:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:43.368 14:46:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:43.368 ************************************ 00:13:43.368 END TEST 
raid_function_test_concat 00:13:43.368 ************************************ 00:13:43.368 14:46:12 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:13:43.368 14:46:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:43.368 14:46:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:43.368 14:46:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.368 ************************************ 00:13:43.368 START TEST raid0_resize_test 00:13:43.368 ************************************ 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60599 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:43.368 Process raid pid: 60599 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60599' 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60599 00:13:43.368 
14:46:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60599 ']' 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:43.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:43.368 14:46:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.368 [2024-11-04 14:46:13.034962] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:43.368 [2024-11-04 14:46:13.035161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.368 [2024-11-04 14:46:13.230298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.626 [2024-11-04 14:46:13.408577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.884 [2024-11-04 14:46:13.662525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.884 [2024-11-04 14:46:13.662572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.141 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:44.141 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:13:44.141 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:13:44.141 14:46:14 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.141 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.141 Base_1 00:13:44.141 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.141 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:13:44.141 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.141 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 Base_2 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 [2024-11-04 14:46:14.041645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:44.399 [2024-11-04 14:46:14.044282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:44.399 [2024-11-04 14:46:14.044368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:44.399 [2024-11-04 14:46:14.044387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:44.399 [2024-11-04 14:46:14.044690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:44.399 [2024-11-04 14:46:14.044861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:44.399 [2024-11-04 14:46:14.044878] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:44.399 [2024-11-04 14:46:14.045049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 [2024-11-04 14:46:14.049627] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:44.399 [2024-11-04 14:46:14.049666] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:44.399 true 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 [2024-11-04 14:46:14.061834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:13:44.399 14:46:14 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 [2024-11-04 14:46:14.113624] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:44.399 [2024-11-04 14:46:14.113656] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:44.399 [2024-11-04 14:46:14.113691] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:13:44.399 true 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:13:44.399 [2024-11-04 14:46:14.125843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:13:44.399 14:46:14 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60599 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60599 ']' 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60599 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60599 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:44.399 killing process with pid 60599 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60599' 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60599 00:13:44.399 [2024-11-04 14:46:14.197514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.399 14:46:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60599 00:13:44.399 [2024-11-04 14:46:14.197646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.399 [2024-11-04 14:46:14.197724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.399 [2024-11-04 14:46:14.197740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:44.399 [2024-11-04 14:46:14.213868] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:13:45.773 14:46:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:13:45.773 00:13:45.773 real 0m2.439s 00:13:45.773 user 0m2.626s 00:13:45.773 sys 0m0.450s 00:13:45.773 14:46:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:45.773 14:46:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.773 ************************************ 00:13:45.773 END TEST raid0_resize_test 00:13:45.773 ************************************ 00:13:45.773 14:46:15 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:13:45.773 14:46:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:45.773 14:46:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:45.773 14:46:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.773 ************************************ 00:13:45.773 START TEST raid1_resize_test 00:13:45.773 ************************************ 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:13:45.773 14:46:15 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60656 00:13:45.773 Process raid pid: 60656 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60656' 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60656 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60656 ']' 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:45.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:45.773 14:46:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.773 [2024-11-04 14:46:15.532139] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:13:45.774 [2024-11-04 14:46:15.532367] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.043 [2024-11-04 14:46:15.726214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.043 [2024-11-04 14:46:15.878256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.302 [2024-11-04 14:46:16.120788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.302 [2024-11-04 14:46:16.120838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.869 Base_1 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.869 Base_2 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.869 [2024-11-04 14:46:16.546886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:46.869 [2024-11-04 14:46:16.549593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:46.869 [2024-11-04 14:46:16.549678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:46.869 [2024-11-04 14:46:16.549699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:46.869 [2024-11-04 14:46:16.550012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:46.869 [2024-11-04 14:46:16.550197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:46.869 [2024-11-04 14:46:16.550220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:46.869 [2024-11-04 14:46:16.550418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.869 [2024-11-04 14:46:16.554867] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:46.869 [2024-11-04 14:46:16.554908] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:46.869 true 00:13:46.869 
14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.869 [2024-11-04 14:46:16.571093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.869 [2024-11-04 14:46:16.618897] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:46.869 [2024-11-04 14:46:16.618939] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:46.869 [2024-11-04 14:46:16.618985] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:13:46.869 true 00:13:46.869 14:46:16 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.869 [2024-11-04 14:46:16.631126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60656 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60656 ']' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60656 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60656 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:46.869 killing process with pid 60656 00:13:46.869 14:46:16 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60656' 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60656 00:13:46.869 [2024-11-04 14:46:16.728917] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:46.869 14:46:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60656 00:13:46.869 [2024-11-04 14:46:16.729039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.869 [2024-11-04 14:46:16.729740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.869 [2024-11-04 14:46:16.729775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:46.869 [2024-11-04 14:46:16.745623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.245 14:46:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:13:48.245 00:13:48.245 real 0m2.454s 00:13:48.245 user 0m2.682s 00:13:48.245 sys 0m0.443s 00:13:48.245 14:46:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:48.245 14:46:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.245 ************************************ 00:13:48.245 END TEST raid1_resize_test 00:13:48.245 ************************************ 00:13:48.245 14:46:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:48.245 14:46:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:48.245 14:46:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:48.245 14:46:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:48.245 14:46:17 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:13:48.245 14:46:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.245 ************************************ 00:13:48.245 START TEST raid_state_function_test 00:13:48.245 ************************************ 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60719 00:13:48.245 Process raid pid: 60719 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60719' 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60719 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60719 ']' 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:48.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:48.245 14:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.245 [2024-11-04 14:46:18.040956] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:48.245 [2024-11-04 14:46:18.041138] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.505 [2024-11-04 14:46:18.226320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.505 [2024-11-04 14:46:18.382255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.763 [2024-11-04 14:46:18.634646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.763 [2024-11-04 14:46:18.634725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.330 [2024-11-04 14:46:19.012709] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:49.330 
[2024-11-04 14:46:19.012845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:49.330 [2024-11-04 14:46:19.012870] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.330 [2024-11-04 14:46:19.012895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.330 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.331 14:46:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.331 "name": "Existed_Raid", 00:13:49.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.331 "strip_size_kb": 64, 00:13:49.331 "state": "configuring", 00:13:49.331 "raid_level": "raid0", 00:13:49.331 "superblock": false, 00:13:49.331 "num_base_bdevs": 2, 00:13:49.331 "num_base_bdevs_discovered": 0, 00:13:49.331 "num_base_bdevs_operational": 2, 00:13:49.331 "base_bdevs_list": [ 00:13:49.331 { 00:13:49.331 "name": "BaseBdev1", 00:13:49.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.331 "is_configured": false, 00:13:49.331 "data_offset": 0, 00:13:49.331 "data_size": 0 00:13:49.331 }, 00:13:49.331 { 00:13:49.331 "name": "BaseBdev2", 00:13:49.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.331 "is_configured": false, 00:13:49.331 "data_offset": 0, 00:13:49.331 "data_size": 0 00:13:49.331 } 00:13:49.331 ] 00:13:49.331 }' 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.331 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.900 [2024-11-04 14:46:19.524727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.900 [2024-11-04 14:46:19.524806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.900 [2024-11-04 14:46:19.532669] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:49.900 [2024-11-04 14:46:19.532737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:49.900 [2024-11-04 14:46:19.532753] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.900 [2024-11-04 14:46:19.532788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.900 [2024-11-04 14:46:19.583936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.900 BaseBdev1 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:49.900 14:46:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.900 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.900 [ 00:13:49.900 { 00:13:49.900 "name": "BaseBdev1", 00:13:49.900 "aliases": [ 00:13:49.900 "83270274-dbbe-4928-89a2-dc4fef6319ba" 00:13:49.900 ], 00:13:49.900 "product_name": "Malloc disk", 00:13:49.900 "block_size": 512, 00:13:49.900 "num_blocks": 65536, 00:13:49.900 "uuid": "83270274-dbbe-4928-89a2-dc4fef6319ba", 00:13:49.900 "assigned_rate_limits": { 00:13:49.900 "rw_ios_per_sec": 0, 00:13:49.900 "rw_mbytes_per_sec": 0, 00:13:49.900 "r_mbytes_per_sec": 0, 00:13:49.900 "w_mbytes_per_sec": 0 00:13:49.900 }, 00:13:49.900 "claimed": true, 00:13:49.900 "claim_type": "exclusive_write", 00:13:49.900 "zoned": false, 00:13:49.900 "supported_io_types": { 00:13:49.900 "read": true, 00:13:49.900 "write": true, 00:13:49.900 "unmap": true, 00:13:49.900 "flush": true, 
00:13:49.900 "reset": true, 00:13:49.900 "nvme_admin": false, 00:13:49.900 "nvme_io": false, 00:13:49.900 "nvme_io_md": false, 00:13:49.900 "write_zeroes": true, 00:13:49.900 "zcopy": true, 00:13:49.900 "get_zone_info": false, 00:13:49.900 "zone_management": false, 00:13:49.900 "zone_append": false, 00:13:49.900 "compare": false, 00:13:49.900 "compare_and_write": false, 00:13:49.900 "abort": true, 00:13:49.900 "seek_hole": false, 00:13:49.900 "seek_data": false, 00:13:49.900 "copy": true, 00:13:49.900 "nvme_iov_md": false 00:13:49.900 }, 00:13:49.900 "memory_domains": [ 00:13:49.900 { 00:13:49.900 "dma_device_id": "system", 00:13:49.901 "dma_device_type": 1 00:13:49.901 }, 00:13:49.901 { 00:13:49.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.901 "dma_device_type": 2 00:13:49.901 } 00:13:49.901 ], 00:13:49.901 "driver_specific": {} 00:13:49.901 } 00:13:49.901 ] 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.901 "name": "Existed_Raid", 00:13:49.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.901 "strip_size_kb": 64, 00:13:49.901 "state": "configuring", 00:13:49.901 "raid_level": "raid0", 00:13:49.901 "superblock": false, 00:13:49.901 "num_base_bdevs": 2, 00:13:49.901 "num_base_bdevs_discovered": 1, 00:13:49.901 "num_base_bdevs_operational": 2, 00:13:49.901 "base_bdevs_list": [ 00:13:49.901 { 00:13:49.901 "name": "BaseBdev1", 00:13:49.901 "uuid": "83270274-dbbe-4928-89a2-dc4fef6319ba", 00:13:49.901 "is_configured": true, 00:13:49.901 "data_offset": 0, 00:13:49.901 "data_size": 65536 00:13:49.901 }, 00:13:49.901 { 00:13:49.901 "name": "BaseBdev2", 00:13:49.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.901 "is_configured": false, 00:13:49.901 "data_offset": 0, 00:13:49.901 "data_size": 0 00:13:49.901 } 00:13:49.901 ] 00:13:49.901 }' 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.901 14:46:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 [2024-11-04 14:46:20.176221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.468 [2024-11-04 14:46:20.176327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 [2024-11-04 14:46:20.184256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.468 [2024-11-04 14:46:20.187223] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.468 [2024-11-04 14:46:20.187289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.468 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.469 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.469 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.469 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.469 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.469 "name": "Existed_Raid", 00:13:50.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.469 "strip_size_kb": 64, 00:13:50.469 "state": "configuring", 00:13:50.469 "raid_level": "raid0", 00:13:50.469 "superblock": false, 00:13:50.469 "num_base_bdevs": 2, 00:13:50.469 
"num_base_bdevs_discovered": 1, 00:13:50.469 "num_base_bdevs_operational": 2, 00:13:50.469 "base_bdevs_list": [ 00:13:50.469 { 00:13:50.469 "name": "BaseBdev1", 00:13:50.469 "uuid": "83270274-dbbe-4928-89a2-dc4fef6319ba", 00:13:50.469 "is_configured": true, 00:13:50.469 "data_offset": 0, 00:13:50.469 "data_size": 65536 00:13:50.469 }, 00:13:50.469 { 00:13:50.469 "name": "BaseBdev2", 00:13:50.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.469 "is_configured": false, 00:13:50.469 "data_offset": 0, 00:13:50.469 "data_size": 0 00:13:50.469 } 00:13:50.469 ] 00:13:50.469 }' 00:13:50.469 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.469 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.037 [2024-11-04 14:46:20.758007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.037 [2024-11-04 14:46:20.758095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:51.037 [2024-11-04 14:46:20.758111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:51.037 [2024-11-04 14:46:20.758488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:51.037 [2024-11-04 14:46:20.758724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:51.037 [2024-11-04 14:46:20.758758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:51.037 [2024-11-04 14:46:20.759145] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.037 BaseBdev2 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.037 [ 00:13:51.037 { 00:13:51.037 "name": "BaseBdev2", 00:13:51.037 "aliases": [ 00:13:51.037 "047802dd-ac0c-45e6-b4e4-6621f0db259d" 00:13:51.037 ], 00:13:51.037 "product_name": "Malloc disk", 00:13:51.037 "block_size": 512, 00:13:51.037 "num_blocks": 65536, 00:13:51.037 "uuid": "047802dd-ac0c-45e6-b4e4-6621f0db259d", 00:13:51.037 
"assigned_rate_limits": { 00:13:51.037 "rw_ios_per_sec": 0, 00:13:51.037 "rw_mbytes_per_sec": 0, 00:13:51.037 "r_mbytes_per_sec": 0, 00:13:51.037 "w_mbytes_per_sec": 0 00:13:51.037 }, 00:13:51.037 "claimed": true, 00:13:51.037 "claim_type": "exclusive_write", 00:13:51.037 "zoned": false, 00:13:51.037 "supported_io_types": { 00:13:51.037 "read": true, 00:13:51.037 "write": true, 00:13:51.037 "unmap": true, 00:13:51.037 "flush": true, 00:13:51.037 "reset": true, 00:13:51.037 "nvme_admin": false, 00:13:51.037 "nvme_io": false, 00:13:51.037 "nvme_io_md": false, 00:13:51.037 "write_zeroes": true, 00:13:51.037 "zcopy": true, 00:13:51.037 "get_zone_info": false, 00:13:51.037 "zone_management": false, 00:13:51.037 "zone_append": false, 00:13:51.037 "compare": false, 00:13:51.037 "compare_and_write": false, 00:13:51.037 "abort": true, 00:13:51.037 "seek_hole": false, 00:13:51.037 "seek_data": false, 00:13:51.037 "copy": true, 00:13:51.037 "nvme_iov_md": false 00:13:51.037 }, 00:13:51.037 "memory_domains": [ 00:13:51.037 { 00:13:51.037 "dma_device_id": "system", 00:13:51.037 "dma_device_type": 1 00:13:51.037 }, 00:13:51.037 { 00:13:51.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.037 "dma_device_type": 2 00:13:51.037 } 00:13:51.037 ], 00:13:51.037 "driver_specific": {} 00:13:51.037 } 00:13:51.037 ] 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.037 "name": "Existed_Raid", 00:13:51.037 "uuid": "48ab1445-3414-4a17-b4ed-169832b982de", 00:13:51.037 "strip_size_kb": 64, 00:13:51.037 "state": "online", 00:13:51.037 "raid_level": "raid0", 00:13:51.037 "superblock": false, 00:13:51.037 "num_base_bdevs": 2, 00:13:51.037 "num_base_bdevs_discovered": 2, 00:13:51.037 "num_base_bdevs_operational": 2, 00:13:51.037 "base_bdevs_list": [ 00:13:51.037 { 
00:13:51.037 "name": "BaseBdev1", 00:13:51.037 "uuid": "83270274-dbbe-4928-89a2-dc4fef6319ba", 00:13:51.037 "is_configured": true, 00:13:51.037 "data_offset": 0, 00:13:51.037 "data_size": 65536 00:13:51.037 }, 00:13:51.037 { 00:13:51.037 "name": "BaseBdev2", 00:13:51.037 "uuid": "047802dd-ac0c-45e6-b4e4-6621f0db259d", 00:13:51.037 "is_configured": true, 00:13:51.037 "data_offset": 0, 00:13:51.037 "data_size": 65536 00:13:51.037 } 00:13:51.037 ] 00:13:51.037 }' 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.037 14:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.604 [2024-11-04 14:46:21.322667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:51.604 "name": "Existed_Raid", 00:13:51.604 "aliases": [ 00:13:51.604 "48ab1445-3414-4a17-b4ed-169832b982de" 00:13:51.604 ], 00:13:51.604 "product_name": "Raid Volume", 00:13:51.604 "block_size": 512, 00:13:51.604 "num_blocks": 131072, 00:13:51.604 "uuid": "48ab1445-3414-4a17-b4ed-169832b982de", 00:13:51.604 "assigned_rate_limits": { 00:13:51.604 "rw_ios_per_sec": 0, 00:13:51.604 "rw_mbytes_per_sec": 0, 00:13:51.604 "r_mbytes_per_sec": 0, 00:13:51.604 "w_mbytes_per_sec": 0 00:13:51.604 }, 00:13:51.604 "claimed": false, 00:13:51.604 "zoned": false, 00:13:51.604 "supported_io_types": { 00:13:51.604 "read": true, 00:13:51.604 "write": true, 00:13:51.604 "unmap": true, 00:13:51.604 "flush": true, 00:13:51.604 "reset": true, 00:13:51.604 "nvme_admin": false, 00:13:51.604 "nvme_io": false, 00:13:51.604 "nvme_io_md": false, 00:13:51.604 "write_zeroes": true, 00:13:51.604 "zcopy": false, 00:13:51.604 "get_zone_info": false, 00:13:51.604 "zone_management": false, 00:13:51.604 "zone_append": false, 00:13:51.604 "compare": false, 00:13:51.604 "compare_and_write": false, 00:13:51.604 "abort": false, 00:13:51.604 "seek_hole": false, 00:13:51.604 "seek_data": false, 00:13:51.604 "copy": false, 00:13:51.604 "nvme_iov_md": false 00:13:51.604 }, 00:13:51.604 "memory_domains": [ 00:13:51.604 { 00:13:51.604 "dma_device_id": "system", 00:13:51.604 "dma_device_type": 1 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.604 "dma_device_type": 2 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "dma_device_id": "system", 00:13:51.604 "dma_device_type": 1 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.604 "dma_device_type": 2 00:13:51.604 } 00:13:51.604 ], 00:13:51.604 "driver_specific": { 00:13:51.604 "raid": { 00:13:51.604 "uuid": "48ab1445-3414-4a17-b4ed-169832b982de", 
00:13:51.604 "strip_size_kb": 64, 00:13:51.604 "state": "online", 00:13:51.604 "raid_level": "raid0", 00:13:51.604 "superblock": false, 00:13:51.604 "num_base_bdevs": 2, 00:13:51.604 "num_base_bdevs_discovered": 2, 00:13:51.604 "num_base_bdevs_operational": 2, 00:13:51.604 "base_bdevs_list": [ 00:13:51.604 { 00:13:51.604 "name": "BaseBdev1", 00:13:51.604 "uuid": "83270274-dbbe-4928-89a2-dc4fef6319ba", 00:13:51.604 "is_configured": true, 00:13:51.604 "data_offset": 0, 00:13:51.604 "data_size": 65536 00:13:51.604 }, 00:13:51.604 { 00:13:51.604 "name": "BaseBdev2", 00:13:51.604 "uuid": "047802dd-ac0c-45e6-b4e4-6621f0db259d", 00:13:51.604 "is_configured": true, 00:13:51.604 "data_offset": 0, 00:13:51.604 "data_size": 65536 00:13:51.604 } 00:13:51.604 ] 00:13:51.604 } 00:13:51.604 } 00:13:51.604 }' 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:51.604 BaseBdev2' 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:51.604 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.886 [2024-11-04 14:46:21.586479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.886 [2024-11-04 14:46:21.586544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.886 [2024-11-04 14:46:21.586623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.886 14:46:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:51.886 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.887 "name": "Existed_Raid", 00:13:51.887 "uuid": "48ab1445-3414-4a17-b4ed-169832b982de", 00:13:51.887 "strip_size_kb": 64, 00:13:51.887 "state": "offline", 00:13:51.887 "raid_level": "raid0", 00:13:51.887 "superblock": false, 00:13:51.887 "num_base_bdevs": 2, 00:13:51.887 "num_base_bdevs_discovered": 1, 00:13:51.887 "num_base_bdevs_operational": 1, 00:13:51.887 "base_bdevs_list": [ 00:13:51.887 { 00:13:51.887 "name": null, 00:13:51.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.887 "is_configured": false, 00:13:51.887 "data_offset": 0, 00:13:51.887 "data_size": 65536 00:13:51.887 }, 00:13:51.887 { 00:13:51.887 "name": "BaseBdev2", 00:13:51.887 "uuid": "047802dd-ac0c-45e6-b4e4-6621f0db259d", 00:13:51.887 "is_configured": true, 00:13:51.887 "data_offset": 0, 00:13:51.887 "data_size": 65536 00:13:51.887 } 00:13:51.887 ] 00:13:51.887 }' 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.887 14:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.453 14:46:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.453 [2024-11-04 14:46:22.212891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.453 [2024-11-04 14:46:22.212972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.453 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:52.711 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:52.711 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:52.711 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:52.711 14:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60719 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60719 ']' 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60719 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60719 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.712 killing process with pid 60719 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60719' 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60719 00:13:52.712 14:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60719 00:13:52.712 [2024-11-04 14:46:22.397554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.712 [2024-11-04 14:46:22.413877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:54.086 00:13:54.086 real 0m5.640s 00:13:54.086 user 0m8.360s 00:13:54.086 sys 
0m0.884s 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.086 ************************************ 00:13:54.086 END TEST raid_state_function_test 00:13:54.086 ************************************ 00:13:54.086 14:46:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:54.086 14:46:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:54.086 14:46:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:54.086 14:46:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.086 ************************************ 00:13:54.086 START TEST raid_state_function_test_sb 00:13:54.086 ************************************ 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:54.086 Process raid pid: 60977 00:13:54.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
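The xtrace above shows the test choosing its `bdev_raid_create` arguments: any level other than raid1 gets a strip size of 64 (`-z 64`), and the superblock variant appends `-s`. A standalone sketch of that selection logic follows; the variable names are taken from the log, but the surrounding script is an assumption, not the actual SPDK test code.

```shell
# Sketch of the argument selection visible in the xtrace (bdev_raid.sh@215-223).
# Variable names mirror the log; the script body itself is an assumption.
raid_level=raid0
superblock=true
strip_size=
strip_size_create_arg=
superblock_create_arg=

# raid1 has no strip size; every other level gets -z 64 in this test
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi

# the superblock variant passes -s through to bdev_raid_create
if [ "$superblock" = true ]; then
    superblock_create_arg=-s
fi

echo "bdev_raid_create $strip_size_create_arg $superblock_create_arg -r $raid_level"
```

With the values the log runs under (raid0, superblock) this reproduces the `-z 64 -s -r raid0` flags seen in the `rpc_cmd bdev_raid_create` calls later in the transcript.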
00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60977 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60977' 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60977 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60977 ']' 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:54.086 14:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.086 [2024-11-04 14:46:23.744776] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:13:54.086 [2024-11-04 14:46:23.744971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.086 [2024-11-04 14:46:23.923670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.344 [2024-11-04 14:46:24.079905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.602 [2024-11-04 14:46:24.318485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.602 [2024-11-04 14:46:24.318557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.232 [2024-11-04 14:46:24.786576] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.232 [2024-11-04 14:46:24.786654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.232 [2024-11-04 14:46:24.786673] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.232 [2024-11-04 14:46:24.786691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.232 
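The `verify_raid_bdev_state` calls that follow amount to pulling one record out of `bdev_raid_get_bdevs` with a `jq` `select` filter and comparing its fields against the expected state. A minimal sketch against a canned response, with field names copied from the JSON in this log; `json_field` is a hypothetical helper standing in for the `jq -r '.[] | select(.name == "Existed_Raid")'` pipeline the real script uses.

```shell
# Hedged sketch: check a raid bdev's state from a canned JSON response
# shaped like the bdev_raid_get_bdevs output in this log. json_field is
# a hypothetical flat-JSON extractor; the real test uses jq instead.
response='{"name":"Existed_Raid","state":"configuring","raid_level":"raid0","num_base_bdevs_discovered":0}'

json_field() {  # crude string-valued field extractor, fine for flat JSON
    printf '%s' "$2" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

state=$(json_field state "$response")
level=$(json_field raid_level "$response")
if [ "$state" = configuring ] && [ "$level" = raid0 ]; then
    echo "Existed_Raid: $level, $state"
fi
```

Before both base bdevs exist, the raid stays in `configuring` with `num_base_bdevs_discovered` at 0, which is exactly what the verification in the log checks for.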
14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.232 "name": "Existed_Raid", 00:13:55.232 "uuid": "35f7ee8a-1015-48a5-8508-898fceddaf60", 00:13:55.232 "strip_size_kb": 
64, 00:13:55.232 "state": "configuring", 00:13:55.232 "raid_level": "raid0", 00:13:55.232 "superblock": true, 00:13:55.232 "num_base_bdevs": 2, 00:13:55.232 "num_base_bdevs_discovered": 0, 00:13:55.232 "num_base_bdevs_operational": 2, 00:13:55.232 "base_bdevs_list": [ 00:13:55.232 { 00:13:55.232 "name": "BaseBdev1", 00:13:55.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.232 "is_configured": false, 00:13:55.232 "data_offset": 0, 00:13:55.232 "data_size": 0 00:13:55.232 }, 00:13:55.232 { 00:13:55.232 "name": "BaseBdev2", 00:13:55.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.232 "is_configured": false, 00:13:55.232 "data_offset": 0, 00:13:55.232 "data_size": 0 00:13:55.232 } 00:13:55.232 ] 00:13:55.232 }' 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.232 14:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.491 [2024-11-04 14:46:25.302647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.491 [2024-11-04 14:46:25.302718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.491 14:46:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.491 [2024-11-04 14:46:25.310642] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.491 [2024-11-04 14:46:25.310698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.491 [2024-11-04 14:46:25.310715] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.491 [2024-11-04 14:46:25.310736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.491 [2024-11-04 14:46:25.359690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.491 BaseBdev1 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.491 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.491 [ 00:13:55.491 { 00:13:55.491 "name": "BaseBdev1", 00:13:55.491 "aliases": [ 00:13:55.491 "657f76fe-cb90-455b-b950-87885971d538" 00:13:55.491 ], 00:13:55.491 "product_name": "Malloc disk", 00:13:55.491 "block_size": 512, 00:13:55.491 "num_blocks": 65536, 00:13:55.491 "uuid": "657f76fe-cb90-455b-b950-87885971d538", 00:13:55.491 "assigned_rate_limits": { 00:13:55.491 "rw_ios_per_sec": 0, 00:13:55.491 "rw_mbytes_per_sec": 0, 00:13:55.491 "r_mbytes_per_sec": 0, 00:13:55.491 "w_mbytes_per_sec": 0 00:13:55.491 }, 00:13:55.491 "claimed": true, 00:13:55.491 "claim_type": "exclusive_write", 00:13:55.491 "zoned": false, 00:13:55.491 "supported_io_types": { 00:13:55.491 "read": true, 00:13:55.491 "write": true, 00:13:55.491 "unmap": true, 00:13:55.491 "flush": true, 00:13:55.491 "reset": true, 00:13:55.491 "nvme_admin": false, 00:13:55.491 "nvme_io": false, 00:13:55.491 "nvme_io_md": false, 00:13:55.491 "write_zeroes": true, 00:13:55.491 "zcopy": true, 00:13:55.491 "get_zone_info": false, 00:13:55.491 "zone_management": false, 00:13:55.491 "zone_append": false, 00:13:55.491 "compare": false, 00:13:55.750 "compare_and_write": false, 00:13:55.750 
"abort": true, 00:13:55.750 "seek_hole": false, 00:13:55.750 "seek_data": false, 00:13:55.750 "copy": true, 00:13:55.750 "nvme_iov_md": false 00:13:55.750 }, 00:13:55.750 "memory_domains": [ 00:13:55.750 { 00:13:55.750 "dma_device_id": "system", 00:13:55.750 "dma_device_type": 1 00:13:55.750 }, 00:13:55.750 { 00:13:55.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.750 "dma_device_type": 2 00:13:55.750 } 00:13:55.750 ], 00:13:55.750 "driver_specific": {} 00:13:55.750 } 00:13:55.750 ] 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.750 "name": "Existed_Raid", 00:13:55.750 "uuid": "e7c7e793-fcd6-4768-a800-be82a1cf56c5", 00:13:55.750 "strip_size_kb": 64, 00:13:55.750 "state": "configuring", 00:13:55.750 "raid_level": "raid0", 00:13:55.750 "superblock": true, 00:13:55.750 "num_base_bdevs": 2, 00:13:55.750 "num_base_bdevs_discovered": 1, 00:13:55.750 "num_base_bdevs_operational": 2, 00:13:55.750 "base_bdevs_list": [ 00:13:55.750 { 00:13:55.750 "name": "BaseBdev1", 00:13:55.750 "uuid": "657f76fe-cb90-455b-b950-87885971d538", 00:13:55.750 "is_configured": true, 00:13:55.750 "data_offset": 2048, 00:13:55.750 "data_size": 63488 00:13:55.750 }, 00:13:55.750 { 00:13:55.750 "name": "BaseBdev2", 00:13:55.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.750 "is_configured": false, 00:13:55.750 "data_offset": 0, 00:13:55.750 "data_size": 0 00:13:55.750 } 00:13:55.750 ] 00:13:55.750 }' 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.750 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.316 [2024-11-04 14:46:25.923981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.316 [2024-11-04 14:46:25.924061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.316 [2024-11-04 14:46:25.932030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.316 [2024-11-04 14:46:25.935096] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.316 [2024-11-04 14:46:25.935156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.316 "name": "Existed_Raid", 00:13:56.316 "uuid": "807dced4-69e4-4a39-b235-e45d9c5c39df", 00:13:56.316 "strip_size_kb": 64, 00:13:56.316 "state": "configuring", 00:13:56.316 "raid_level": "raid0", 00:13:56.316 "superblock": true, 00:13:56.316 "num_base_bdevs": 2, 00:13:56.316 "num_base_bdevs_discovered": 1, 00:13:56.316 "num_base_bdevs_operational": 2, 00:13:56.316 "base_bdevs_list": [ 00:13:56.316 { 00:13:56.316 "name": "BaseBdev1", 00:13:56.316 "uuid": "657f76fe-cb90-455b-b950-87885971d538", 00:13:56.316 "is_configured": true, 00:13:56.316 "data_offset": 2048, 
00:13:56.316 "data_size": 63488 00:13:56.316 }, 00:13:56.316 { 00:13:56.316 "name": "BaseBdev2", 00:13:56.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.316 "is_configured": false, 00:13:56.316 "data_offset": 0, 00:13:56.316 "data_size": 0 00:13:56.316 } 00:13:56.316 ] 00:13:56.316 }' 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.316 14:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.574 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:56.574 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.574 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.832 [2024-11-04 14:46:26.493069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.832 [2024-11-04 14:46:26.493513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:56.832 [2024-11-04 14:46:26.493535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:56.832 BaseBdev2 00:13:56.832 [2024-11-04 14:46:26.493907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:56.832 [2024-11-04 14:46:26.494098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:56.832 [2024-11-04 14:46:26.494119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:56.832 [2024-11-04 14:46:26.494354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.832 [ 00:13:56.832 { 00:13:56.832 "name": "BaseBdev2", 00:13:56.832 "aliases": [ 00:13:56.832 "c91dcc72-5f22-4349-b5c7-ae923372d799" 00:13:56.832 ], 00:13:56.832 "product_name": "Malloc disk", 00:13:56.832 "block_size": 512, 00:13:56.832 "num_blocks": 65536, 00:13:56.832 "uuid": "c91dcc72-5f22-4349-b5c7-ae923372d799", 00:13:56.832 "assigned_rate_limits": { 00:13:56.832 "rw_ios_per_sec": 0, 00:13:56.832 "rw_mbytes_per_sec": 0, 00:13:56.832 "r_mbytes_per_sec": 0, 00:13:56.832 "w_mbytes_per_sec": 0 00:13:56.832 }, 00:13:56.832 "claimed": true, 00:13:56.832 "claim_type": 
"exclusive_write", 00:13:56.832 "zoned": false, 00:13:56.832 "supported_io_types": { 00:13:56.832 "read": true, 00:13:56.832 "write": true, 00:13:56.832 "unmap": true, 00:13:56.832 "flush": true, 00:13:56.832 "reset": true, 00:13:56.832 "nvme_admin": false, 00:13:56.832 "nvme_io": false, 00:13:56.832 "nvme_io_md": false, 00:13:56.832 "write_zeroes": true, 00:13:56.832 "zcopy": true, 00:13:56.832 "get_zone_info": false, 00:13:56.832 "zone_management": false, 00:13:56.832 "zone_append": false, 00:13:56.832 "compare": false, 00:13:56.832 "compare_and_write": false, 00:13:56.832 "abort": true, 00:13:56.832 "seek_hole": false, 00:13:56.832 "seek_data": false, 00:13:56.832 "copy": true, 00:13:56.832 "nvme_iov_md": false 00:13:56.832 }, 00:13:56.832 "memory_domains": [ 00:13:56.832 { 00:13:56.832 "dma_device_id": "system", 00:13:56.832 "dma_device_type": 1 00:13:56.832 }, 00:13:56.832 { 00:13:56.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.832 "dma_device_type": 2 00:13:56.832 } 00:13:56.832 ], 00:13:56.832 "driver_specific": {} 00:13:56.832 } 00:13:56.832 ] 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:56.832 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.833 "name": "Existed_Raid", 00:13:56.833 "uuid": "807dced4-69e4-4a39-b235-e45d9c5c39df", 00:13:56.833 "strip_size_kb": 64, 00:13:56.833 "state": "online", 00:13:56.833 "raid_level": "raid0", 00:13:56.833 "superblock": true, 00:13:56.833 "num_base_bdevs": 2, 00:13:56.833 "num_base_bdevs_discovered": 2, 00:13:56.833 "num_base_bdevs_operational": 2, 00:13:56.833 "base_bdevs_list": [ 00:13:56.833 { 00:13:56.833 "name": "BaseBdev1", 00:13:56.833 "uuid": "657f76fe-cb90-455b-b950-87885971d538", 00:13:56.833 "is_configured": true, 00:13:56.833 "data_offset": 2048, 00:13:56.833 "data_size": 63488 
00:13:56.833 }, 00:13:56.833 { 00:13:56.833 "name": "BaseBdev2", 00:13:56.833 "uuid": "c91dcc72-5f22-4349-b5c7-ae923372d799", 00:13:56.833 "is_configured": true, 00:13:56.833 "data_offset": 2048, 00:13:56.833 "data_size": 63488 00:13:56.833 } 00:13:56.833 ] 00:13:56.833 }' 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.833 14:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.399 [2024-11-04 14:46:27.061756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.399 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.399 "name": 
"Existed_Raid", 00:13:57.399 "aliases": [ 00:13:57.399 "807dced4-69e4-4a39-b235-e45d9c5c39df" 00:13:57.399 ], 00:13:57.399 "product_name": "Raid Volume", 00:13:57.399 "block_size": 512, 00:13:57.399 "num_blocks": 126976, 00:13:57.399 "uuid": "807dced4-69e4-4a39-b235-e45d9c5c39df", 00:13:57.399 "assigned_rate_limits": { 00:13:57.399 "rw_ios_per_sec": 0, 00:13:57.399 "rw_mbytes_per_sec": 0, 00:13:57.399 "r_mbytes_per_sec": 0, 00:13:57.399 "w_mbytes_per_sec": 0 00:13:57.399 }, 00:13:57.399 "claimed": false, 00:13:57.399 "zoned": false, 00:13:57.399 "supported_io_types": { 00:13:57.399 "read": true, 00:13:57.399 "write": true, 00:13:57.399 "unmap": true, 00:13:57.399 "flush": true, 00:13:57.399 "reset": true, 00:13:57.399 "nvme_admin": false, 00:13:57.399 "nvme_io": false, 00:13:57.399 "nvme_io_md": false, 00:13:57.399 "write_zeroes": true, 00:13:57.399 "zcopy": false, 00:13:57.399 "get_zone_info": false, 00:13:57.399 "zone_management": false, 00:13:57.399 "zone_append": false, 00:13:57.399 "compare": false, 00:13:57.399 "compare_and_write": false, 00:13:57.399 "abort": false, 00:13:57.399 "seek_hole": false, 00:13:57.399 "seek_data": false, 00:13:57.399 "copy": false, 00:13:57.399 "nvme_iov_md": false 00:13:57.399 }, 00:13:57.399 "memory_domains": [ 00:13:57.399 { 00:13:57.399 "dma_device_id": "system", 00:13:57.399 "dma_device_type": 1 00:13:57.399 }, 00:13:57.399 { 00:13:57.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.399 "dma_device_type": 2 00:13:57.399 }, 00:13:57.399 { 00:13:57.399 "dma_device_id": "system", 00:13:57.399 "dma_device_type": 1 00:13:57.399 }, 00:13:57.400 { 00:13:57.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.400 "dma_device_type": 2 00:13:57.400 } 00:13:57.400 ], 00:13:57.400 "driver_specific": { 00:13:57.400 "raid": { 00:13:57.400 "uuid": "807dced4-69e4-4a39-b235-e45d9c5c39df", 00:13:57.400 "strip_size_kb": 64, 00:13:57.400 "state": "online", 00:13:57.400 "raid_level": "raid0", 00:13:57.400 "superblock": true, 00:13:57.400 
"num_base_bdevs": 2, 00:13:57.400 "num_base_bdevs_discovered": 2, 00:13:57.400 "num_base_bdevs_operational": 2, 00:13:57.400 "base_bdevs_list": [ 00:13:57.400 { 00:13:57.400 "name": "BaseBdev1", 00:13:57.400 "uuid": "657f76fe-cb90-455b-b950-87885971d538", 00:13:57.400 "is_configured": true, 00:13:57.400 "data_offset": 2048, 00:13:57.400 "data_size": 63488 00:13:57.400 }, 00:13:57.400 { 00:13:57.400 "name": "BaseBdev2", 00:13:57.400 "uuid": "c91dcc72-5f22-4349-b5c7-ae923372d799", 00:13:57.400 "is_configured": true, 00:13:57.400 "data_offset": 2048, 00:13:57.400 "data_size": 63488 00:13:57.400 } 00:13:57.400 ] 00:13:57.400 } 00:13:57.400 } 00:13:57.400 }' 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:57.400 BaseBdev2' 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.400 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.658 [2024-11-04 14:46:27.309566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.658 [2024-11-04 14:46:27.309803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.658 [2024-11-04 14:46:27.310010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.658 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.659 14:46:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.659 "name": "Existed_Raid", 00:13:57.659 "uuid": "807dced4-69e4-4a39-b235-e45d9c5c39df", 00:13:57.659 "strip_size_kb": 64, 00:13:57.659 "state": "offline", 00:13:57.659 "raid_level": "raid0", 00:13:57.659 "superblock": true, 00:13:57.659 "num_base_bdevs": 2, 00:13:57.659 "num_base_bdevs_discovered": 1, 00:13:57.659 "num_base_bdevs_operational": 1, 00:13:57.659 "base_bdevs_list": [ 00:13:57.659 { 00:13:57.659 "name": null, 00:13:57.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.659 "is_configured": false, 00:13:57.659 "data_offset": 0, 00:13:57.659 "data_size": 63488 00:13:57.659 }, 00:13:57.659 { 00:13:57.659 "name": "BaseBdev2", 00:13:57.659 "uuid": "c91dcc72-5f22-4349-b5c7-ae923372d799", 00:13:57.659 "is_configured": true, 00:13:57.659 "data_offset": 2048, 00:13:57.659 "data_size": 63488 00:13:57.659 } 00:13:57.659 ] 00:13:57.659 }' 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.659 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.232 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:58.232 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.232 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.232 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.232 14:46:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.232 14:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.232 14:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.232 [2024-11-04 14:46:28.007390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.232 [2024-11-04 14:46:28.007634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.232 14:46:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60977 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60977 ']' 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60977 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60977 00:13:58.491 killing process with pid 60977 00:13:58.491 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:58.492 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:58.492 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60977' 00:13:58.492 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60977 00:13:58.492 14:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60977 00:13:58.492 [2024-11-04 14:46:28.197930] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.492 [2024-11-04 14:46:28.213556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.868 ************************************ 00:13:59.868 END TEST 
raid_state_function_test_sb 00:13:59.868 ************************************ 00:13:59.868 14:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:59.868 00:13:59.868 real 0m5.778s 00:13:59.868 user 0m8.621s 00:13:59.868 sys 0m0.852s 00:13:59.868 14:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:59.868 14:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.868 14:46:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:59.868 14:46:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:59.868 14:46:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:59.868 14:46:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.868 ************************************ 00:13:59.868 START TEST raid_superblock_test 00:13:59.868 ************************************ 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:59.868 14:46:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61235 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61235 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61235 ']' 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:59.868 14:46:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.868 [2024-11-04 14:46:29.543272] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:13:59.868 [2024-11-04 14:46:29.543625] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61235 ] 00:13:59.868 [2024-11-04 14:46:29.716492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.127 [2024-11-04 14:46:29.894822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.384 [2024-11-04 14:46:30.164568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.385 [2024-11-04 14:46:30.164665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:00.952 14:46:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.952 malloc1 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.952 [2024-11-04 14:46:30.665899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:00.952 [2024-11-04 14:46:30.666076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.952 [2024-11-04 14:46:30.666129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:00.952 [2024-11-04 14:46:30.666158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.952 [2024-11-04 14:46:30.669780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.952 [2024-11-04 14:46:30.669944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:00.952 pt1 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.952 14:46:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.952 malloc2 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.952 [2024-11-04 14:46:30.729517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.952 [2024-11-04 14:46:30.729851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.952 [2024-11-04 14:46:30.729954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:00.952 
[2024-11-04 14:46:30.730101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.952 [2024-11-04 14:46:30.733349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.952 [2024-11-04 14:46:30.733513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.952 pt2 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.952 [2024-11-04 14:46:30.741862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:00.952 [2024-11-04 14:46:30.744711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.952 [2024-11-04 14:46:30.744929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:00.952 [2024-11-04 14:46:30.744949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:00.952 [2024-11-04 14:46:30.745278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:00.952 [2024-11-04 14:46:30.745502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:00.952 [2024-11-04 14:46:30.745525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:00.952 [2024-11-04 14:46:30.745759] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.952 "name": "raid_bdev1", 00:14:00.952 "uuid": 
"2cb62dac-e56c-436a-8d08-24d81f11c0e2", 00:14:00.952 "strip_size_kb": 64, 00:14:00.952 "state": "online", 00:14:00.952 "raid_level": "raid0", 00:14:00.952 "superblock": true, 00:14:00.952 "num_base_bdevs": 2, 00:14:00.952 "num_base_bdevs_discovered": 2, 00:14:00.952 "num_base_bdevs_operational": 2, 00:14:00.952 "base_bdevs_list": [ 00:14:00.952 { 00:14:00.952 "name": "pt1", 00:14:00.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.952 "is_configured": true, 00:14:00.952 "data_offset": 2048, 00:14:00.952 "data_size": 63488 00:14:00.952 }, 00:14:00.952 { 00:14:00.952 "name": "pt2", 00:14:00.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.952 "is_configured": true, 00:14:00.952 "data_offset": 2048, 00:14:00.952 "data_size": 63488 00:14:00.952 } 00:14:00.952 ] 00:14:00.952 }' 00:14:00.952 14:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.953 14:46:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.521 
14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.521 [2024-11-04 14:46:31.254649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.521 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.521 "name": "raid_bdev1", 00:14:01.521 "aliases": [ 00:14:01.521 "2cb62dac-e56c-436a-8d08-24d81f11c0e2" 00:14:01.521 ], 00:14:01.521 "product_name": "Raid Volume", 00:14:01.521 "block_size": 512, 00:14:01.521 "num_blocks": 126976, 00:14:01.521 "uuid": "2cb62dac-e56c-436a-8d08-24d81f11c0e2", 00:14:01.521 "assigned_rate_limits": { 00:14:01.521 "rw_ios_per_sec": 0, 00:14:01.521 "rw_mbytes_per_sec": 0, 00:14:01.521 "r_mbytes_per_sec": 0, 00:14:01.521 "w_mbytes_per_sec": 0 00:14:01.521 }, 00:14:01.522 "claimed": false, 00:14:01.522 "zoned": false, 00:14:01.522 "supported_io_types": { 00:14:01.522 "read": true, 00:14:01.522 "write": true, 00:14:01.522 "unmap": true, 00:14:01.522 "flush": true, 00:14:01.522 "reset": true, 00:14:01.522 "nvme_admin": false, 00:14:01.522 "nvme_io": false, 00:14:01.522 "nvme_io_md": false, 00:14:01.522 "write_zeroes": true, 00:14:01.522 "zcopy": false, 00:14:01.522 "get_zone_info": false, 00:14:01.522 "zone_management": false, 00:14:01.522 "zone_append": false, 00:14:01.522 "compare": false, 00:14:01.522 "compare_and_write": false, 00:14:01.522 "abort": false, 00:14:01.522 "seek_hole": false, 00:14:01.522 "seek_data": false, 00:14:01.522 "copy": false, 00:14:01.522 "nvme_iov_md": false 00:14:01.522 }, 00:14:01.522 "memory_domains": [ 00:14:01.522 { 00:14:01.522 "dma_device_id": "system", 00:14:01.522 "dma_device_type": 1 00:14:01.522 }, 00:14:01.522 { 00:14:01.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.522 "dma_device_type": 2 00:14:01.522 }, 00:14:01.522 { 00:14:01.522 "dma_device_id": "system", 00:14:01.522 
"dma_device_type": 1 00:14:01.522 }, 00:14:01.522 { 00:14:01.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.522 "dma_device_type": 2 00:14:01.522 } 00:14:01.522 ], 00:14:01.522 "driver_specific": { 00:14:01.522 "raid": { 00:14:01.522 "uuid": "2cb62dac-e56c-436a-8d08-24d81f11c0e2", 00:14:01.522 "strip_size_kb": 64, 00:14:01.522 "state": "online", 00:14:01.522 "raid_level": "raid0", 00:14:01.522 "superblock": true, 00:14:01.522 "num_base_bdevs": 2, 00:14:01.522 "num_base_bdevs_discovered": 2, 00:14:01.522 "num_base_bdevs_operational": 2, 00:14:01.522 "base_bdevs_list": [ 00:14:01.522 { 00:14:01.522 "name": "pt1", 00:14:01.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.522 "is_configured": true, 00:14:01.522 "data_offset": 2048, 00:14:01.522 "data_size": 63488 00:14:01.522 }, 00:14:01.522 { 00:14:01.522 "name": "pt2", 00:14:01.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.522 "is_configured": true, 00:14:01.522 "data_offset": 2048, 00:14:01.522 "data_size": 63488 00:14:01.522 } 00:14:01.522 ] 00:14:01.522 } 00:14:01.522 } 00:14:01.522 }' 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:01.522 pt2' 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.522 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.781 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.782 [2024-11-04 14:46:31.506739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2cb62dac-e56c-436a-8d08-24d81f11c0e2 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2cb62dac-e56c-436a-8d08-24d81f11c0e2 ']' 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.782 [2024-11-04 14:46:31.554200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.782 [2024-11-04 14:46:31.554513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.782 [2024-11-04 14:46:31.554690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.782 [2024-11-04 14:46:31.554775] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.782 [2024-11-04 14:46:31.554799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.782 
14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:01.782 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.041 [2024-11-04 14:46:31.678329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:02.041 [2024-11-04 14:46:31.681156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:02.041 [2024-11-04 14:46:31.681275] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:02.041 [2024-11-04 14:46:31.681395] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:02.041 [2024-11-04 14:46:31.681424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.041 [2024-11-04 14:46:31.681444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:02.041 request: 00:14:02.041 { 00:14:02.041 "name": "raid_bdev1", 00:14:02.041 "raid_level": "raid0", 00:14:02.041 "base_bdevs": [ 00:14:02.041 "malloc1", 00:14:02.041 "malloc2" 00:14:02.041 ], 00:14:02.041 "strip_size_kb": 64, 00:14:02.041 "superblock": false, 00:14:02.041 "method": "bdev_raid_create", 00:14:02.041 "req_id": 1 00:14:02.041 } 00:14:02.041 Got JSON-RPC error response 00:14:02.041 response: 00:14:02.041 { 00:14:02.041 "code": -17, 00:14:02.041 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:02.041 } 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.041 [2024-11-04 14:46:31.746399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:02.041 [2024-11-04 14:46:31.746739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.041 [2024-11-04 14:46:31.746819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:02.041 [2024-11-04 14:46:31.747050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.041 [2024-11-04 14:46:31.750451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.041 [2024-11-04 14:46:31.750614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:02.041 [2024-11-04 14:46:31.750864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:02.041 [2024-11-04 14:46:31.751079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:02.041 pt1 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.041 "name": "raid_bdev1", 00:14:02.041 "uuid": "2cb62dac-e56c-436a-8d08-24d81f11c0e2", 00:14:02.041 "strip_size_kb": 64, 00:14:02.041 "state": "configuring", 00:14:02.041 "raid_level": "raid0", 00:14:02.041 "superblock": true, 00:14:02.041 "num_base_bdevs": 2, 00:14:02.041 "num_base_bdevs_discovered": 1, 00:14:02.041 "num_base_bdevs_operational": 2, 00:14:02.041 "base_bdevs_list": [ 00:14:02.041 { 00:14:02.041 "name": "pt1", 00:14:02.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.041 "is_configured": true, 00:14:02.041 "data_offset": 2048, 00:14:02.041 "data_size": 63488 00:14:02.041 }, 00:14:02.041 { 00:14:02.041 "name": null, 00:14:02.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.041 "is_configured": false, 00:14:02.041 "data_offset": 2048, 00:14:02.041 "data_size": 63488 00:14:02.041 } 00:14:02.041 ] 00:14:02.041 }' 00:14:02.041 14:46:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.041 14:46:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.610 [2024-11-04 14:46:32.275179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:02.610 [2024-11-04 14:46:32.275574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.610 [2024-11-04 14:46:32.275749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:02.610 [2024-11-04 14:46:32.275787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.610 [2024-11-04 14:46:32.276494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.610 [2024-11-04 14:46:32.276536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:02.610 [2024-11-04 14:46:32.276661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:02.610 [2024-11-04 14:46:32.276700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:02.610 [2024-11-04 14:46:32.276869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:02.610 [2024-11-04 14:46:32.276891] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:02.610 [2024-11-04 14:46:32.277199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:02.610 [2024-11-04 14:46:32.277438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:02.610 [2024-11-04 14:46:32.277455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:02.610 [2024-11-04 14:46:32.277630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.610 pt2 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.610 "name": "raid_bdev1", 00:14:02.610 "uuid": "2cb62dac-e56c-436a-8d08-24d81f11c0e2", 00:14:02.610 "strip_size_kb": 64, 00:14:02.610 "state": "online", 00:14:02.610 "raid_level": "raid0", 00:14:02.610 "superblock": true, 00:14:02.610 "num_base_bdevs": 2, 00:14:02.610 "num_base_bdevs_discovered": 2, 00:14:02.610 "num_base_bdevs_operational": 2, 00:14:02.610 "base_bdevs_list": [ 00:14:02.610 { 00:14:02.610 "name": "pt1", 00:14:02.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.610 "is_configured": true, 00:14:02.610 "data_offset": 2048, 00:14:02.610 "data_size": 63488 00:14:02.610 }, 00:14:02.610 { 00:14:02.610 "name": "pt2", 00:14:02.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.610 "is_configured": true, 00:14:02.610 "data_offset": 2048, 00:14:02.610 "data_size": 63488 00:14:02.610 } 00:14:02.610 ] 00:14:02.610 }' 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.610 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:03.177 
14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.177 [2024-11-04 14:46:32.815730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.177 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.177 "name": "raid_bdev1", 00:14:03.177 "aliases": [ 00:14:03.177 "2cb62dac-e56c-436a-8d08-24d81f11c0e2" 00:14:03.177 ], 00:14:03.177 "product_name": "Raid Volume", 00:14:03.177 "block_size": 512, 00:14:03.177 "num_blocks": 126976, 00:14:03.178 "uuid": "2cb62dac-e56c-436a-8d08-24d81f11c0e2", 00:14:03.178 "assigned_rate_limits": { 00:14:03.178 "rw_ios_per_sec": 0, 00:14:03.178 "rw_mbytes_per_sec": 0, 00:14:03.178 "r_mbytes_per_sec": 0, 00:14:03.178 "w_mbytes_per_sec": 0 00:14:03.178 }, 00:14:03.178 "claimed": false, 00:14:03.178 "zoned": false, 00:14:03.178 "supported_io_types": { 00:14:03.178 "read": true, 00:14:03.178 "write": true, 00:14:03.178 "unmap": true, 00:14:03.178 "flush": true, 00:14:03.178 "reset": true, 00:14:03.178 "nvme_admin": false, 00:14:03.178 "nvme_io": false, 00:14:03.178 "nvme_io_md": false, 00:14:03.178 
"write_zeroes": true, 00:14:03.178 "zcopy": false, 00:14:03.178 "get_zone_info": false, 00:14:03.178 "zone_management": false, 00:14:03.178 "zone_append": false, 00:14:03.178 "compare": false, 00:14:03.178 "compare_and_write": false, 00:14:03.178 "abort": false, 00:14:03.178 "seek_hole": false, 00:14:03.178 "seek_data": false, 00:14:03.178 "copy": false, 00:14:03.178 "nvme_iov_md": false 00:14:03.178 }, 00:14:03.178 "memory_domains": [ 00:14:03.178 { 00:14:03.178 "dma_device_id": "system", 00:14:03.178 "dma_device_type": 1 00:14:03.178 }, 00:14:03.178 { 00:14:03.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.178 "dma_device_type": 2 00:14:03.178 }, 00:14:03.178 { 00:14:03.178 "dma_device_id": "system", 00:14:03.178 "dma_device_type": 1 00:14:03.178 }, 00:14:03.178 { 00:14:03.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.178 "dma_device_type": 2 00:14:03.178 } 00:14:03.178 ], 00:14:03.178 "driver_specific": { 00:14:03.178 "raid": { 00:14:03.178 "uuid": "2cb62dac-e56c-436a-8d08-24d81f11c0e2", 00:14:03.178 "strip_size_kb": 64, 00:14:03.178 "state": "online", 00:14:03.178 "raid_level": "raid0", 00:14:03.178 "superblock": true, 00:14:03.178 "num_base_bdevs": 2, 00:14:03.178 "num_base_bdevs_discovered": 2, 00:14:03.178 "num_base_bdevs_operational": 2, 00:14:03.178 "base_bdevs_list": [ 00:14:03.178 { 00:14:03.178 "name": "pt1", 00:14:03.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.178 "is_configured": true, 00:14:03.178 "data_offset": 2048, 00:14:03.178 "data_size": 63488 00:14:03.178 }, 00:14:03.178 { 00:14:03.178 "name": "pt2", 00:14:03.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.178 "is_configured": true, 00:14:03.178 "data_offset": 2048, 00:14:03.178 "data_size": 63488 00:14:03.178 } 00:14:03.178 ] 00:14:03.178 } 00:14:03.178 } 00:14:03.178 }' 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:03.178 pt2' 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.178 14:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.178 14:46:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.178 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.178 [2024-11-04 14:46:33.055684] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2cb62dac-e56c-436a-8d08-24d81f11c0e2 '!=' 2cb62dac-e56c-436a-8d08-24d81f11c0e2 ']' 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61235 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61235 ']' 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61235 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61235 00:14:03.436 killing process with pid 61235 
00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61235' 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61235 00:14:03.436 14:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61235 00:14:03.436 [2024-11-04 14:46:33.135335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.436 [2024-11-04 14:46:33.135511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.436 [2024-11-04 14:46:33.135603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.436 [2024-11-04 14:46:33.135625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:03.694 [2024-11-04 14:46:33.339156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.628 ************************************ 00:14:04.628 END TEST raid_superblock_test 00:14:04.628 ************************************ 00:14:04.628 14:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:04.628 00:14:04.628 real 0m5.056s 00:14:04.628 user 0m7.324s 00:14:04.628 sys 0m0.782s 00:14:04.628 14:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:04.628 14:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.887 14:46:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:04.887 14:46:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:04.887 14:46:34 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:14:04.887 14:46:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.887 ************************************ 00:14:04.887 START TEST raid_read_error_test 00:14:04.887 ************************************ 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:04.887 14:46:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.woOaec7UoT 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61452 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61452 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61452 ']' 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:04.887 14:46:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.887 [2024-11-04 14:46:34.677250] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:14:04.887 [2024-11-04 14:46:34.677455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61452 ] 00:14:05.146 [2024-11-04 14:46:34.864326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.146 [2024-11-04 14:46:35.022875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.405 [2024-11-04 14:46:35.279509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.405 [2024-11-04 14:46:35.279572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.972 BaseBdev1_malloc 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.972 true 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.972 [2024-11-04 14:46:35.754489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:05.972 [2024-11-04 14:46:35.754813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.972 [2024-11-04 14:46:35.754855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:05.972 [2024-11-04 14:46:35.754877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.972 [2024-11-04 14:46:35.758118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.972 BaseBdev1 00:14:05.972 [2024-11-04 14:46:35.758301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:05.972 BaseBdev2_malloc 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.972 true 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.972 [2024-11-04 14:46:35.826455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:05.972 [2024-11-04 14:46:35.826852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.972 [2024-11-04 14:46:35.826923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:05.972 [2024-11-04 14:46:35.827097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.972 [2024-11-04 14:46:35.830308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.972 [2024-11-04 14:46:35.830472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:05.972 BaseBdev2 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:05.972 14:46:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.972 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.972 [2024-11-04 14:46:35.834835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.972 [2024-11-04 14:46:35.837778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.972 [2024-11-04 14:46:35.838089] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:05.973 [2024-11-04 14:46:35.838123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:05.973 [2024-11-04 14:46:35.838436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:05.973 [2024-11-04 14:46:35.838714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:05.973 [2024-11-04 14:46:35.838733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:05.973 [2024-11-04 14:46:35.839014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.973 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.231 14:46:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.231 "name": "raid_bdev1", 00:14:06.231 "uuid": "4e02a46c-d44f-4e9b-af7a-2380f05ab834", 00:14:06.231 "strip_size_kb": 64, 00:14:06.231 "state": "online", 00:14:06.231 "raid_level": "raid0", 00:14:06.231 "superblock": true, 00:14:06.231 "num_base_bdevs": 2, 00:14:06.231 "num_base_bdevs_discovered": 2, 00:14:06.232 "num_base_bdevs_operational": 2, 00:14:06.232 "base_bdevs_list": [ 00:14:06.232 { 00:14:06.232 "name": "BaseBdev1", 00:14:06.232 "uuid": "98cb30c9-455e-5227-8a12-495ceb51b333", 00:14:06.232 "is_configured": true, 00:14:06.232 "data_offset": 2048, 00:14:06.232 "data_size": 63488 00:14:06.232 }, 00:14:06.232 { 00:14:06.232 "name": "BaseBdev2", 00:14:06.232 "uuid": "9f81fdab-9d50-5301-9691-4a6320433474", 00:14:06.232 "is_configured": true, 00:14:06.232 "data_offset": 2048, 00:14:06.232 "data_size": 63488 00:14:06.232 } 00:14:06.232 ] 00:14:06.232 }' 00:14:06.232 14:46:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.232 14:46:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.490 14:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:06.490 14:46:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:06.749 [2024-11-04 14:46:36.464937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:07.684 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:07.684 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.684 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.684 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.684 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.685 "name": "raid_bdev1", 00:14:07.685 "uuid": "4e02a46c-d44f-4e9b-af7a-2380f05ab834", 00:14:07.685 "strip_size_kb": 64, 00:14:07.685 "state": "online", 00:14:07.685 "raid_level": "raid0", 00:14:07.685 "superblock": true, 00:14:07.685 "num_base_bdevs": 2, 00:14:07.685 "num_base_bdevs_discovered": 2, 00:14:07.685 "num_base_bdevs_operational": 2, 00:14:07.685 "base_bdevs_list": [ 00:14:07.685 { 00:14:07.685 "name": "BaseBdev1", 00:14:07.685 "uuid": "98cb30c9-455e-5227-8a12-495ceb51b333", 00:14:07.685 "is_configured": true, 00:14:07.685 "data_offset": 2048, 00:14:07.685 "data_size": 63488 00:14:07.685 }, 00:14:07.685 { 00:14:07.685 "name": "BaseBdev2", 00:14:07.685 "uuid": "9f81fdab-9d50-5301-9691-4a6320433474", 00:14:07.685 "is_configured": true, 00:14:07.685 "data_offset": 2048, 00:14:07.685 "data_size": 63488 00:14:07.685 } 00:14:07.685 ] 00:14:07.685 }' 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.685 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.252 [2024-11-04 14:46:37.891268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.252 [2024-11-04 14:46:37.891613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.252 [2024-11-04 14:46:37.895336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.252 { 00:14:08.252 "results": [ 00:14:08.252 { 00:14:08.252 "job": "raid_bdev1", 00:14:08.252 "core_mask": "0x1", 00:14:08.252 "workload": "randrw", 00:14:08.252 "percentage": 50, 00:14:08.252 "status": "finished", 00:14:08.252 "queue_depth": 1, 00:14:08.252 "io_size": 131072, 00:14:08.252 "runtime": 1.423862, 00:14:08.252 "iops": 9746.731073657418, 00:14:08.252 "mibps": 1218.3413842071773, 00:14:08.252 "io_failed": 1, 00:14:08.252 "io_timeout": 0, 00:14:08.252 "avg_latency_us": 144.67845810216875, 00:14:08.252 "min_latency_us": 40.49454545454545, 00:14:08.252 "max_latency_us": 2040.5527272727272 00:14:08.252 } 00:14:08.252 ], 00:14:08.252 "core_count": 1 00:14:08.252 } 00:14:08.252 [2024-11-04 14:46:37.895581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.252 [2024-11-04 14:46:37.895644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.252 [2024-11-04 14:46:37.895666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61452 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61452 ']' 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61452 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61452 00:14:08.252 killing process with pid 61452 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61452' 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61452 00:14:08.252 [2024-11-04 14:46:37.937582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.252 14:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61452 00:14:08.252 [2024-11-04 14:46:38.073786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.woOaec7UoT 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:09.697 ************************************ 00:14:09.697 END TEST raid_read_error_test 00:14:09.697 ************************************ 00:14:09.697 14:46:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:14:09.697 00:14:09.697 real 0m4.719s 00:14:09.697 user 0m5.814s 00:14:09.697 sys 0m0.622s 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.697 14:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.697 14:46:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:09.697 14:46:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:09.697 14:46:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:09.697 14:46:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.697 ************************************ 00:14:09.697 START TEST raid_write_error_test 00:14:09.697 ************************************ 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.697 14:46:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RkAd6TYnBm 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61598 00:14:09.697 14:46:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61598 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61598 ']' 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:09.697 14:46:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.697 [2024-11-04 14:46:39.448408] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:14:09.697 [2024-11-04 14:46:39.449346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61598 ] 00:14:09.954 [2024-11-04 14:46:39.642879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.954 [2024-11-04 14:46:39.824373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.212 [2024-11-04 14:46:40.072752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.212 [2024-11-04 14:46:40.072818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 BaseBdev1_malloc 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 true 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 [2024-11-04 14:46:40.542718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:10.778 [2024-11-04 14:46:40.543090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.778 [2024-11-04 14:46:40.543141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:10.778 [2024-11-04 14:46:40.543164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.778 [2024-11-04 14:46:40.546325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.778 [2024-11-04 14:46:40.546377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.778 BaseBdev1 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 BaseBdev2_malloc 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:10.778 14:46:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 true 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 [2024-11-04 14:46:40.618113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:10.778 [2024-11-04 14:46:40.618536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.778 [2024-11-04 14:46:40.618580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:10.778 [2024-11-04 14:46:40.618600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.778 [2024-11-04 14:46:40.622064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.778 BaseBdev2 00:14:10.778 [2024-11-04 14:46:40.622308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.778 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.778 [2024-11-04 14:46:40.626676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:14:10.778 [2024-11-04 14:46:40.629584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.778 [2024-11-04 14:46:40.629881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:10.778 [2024-11-04 14:46:40.629908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:10.778 [2024-11-04 14:46:40.630313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:10.778 [2024-11-04 14:46:40.630630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:10.779 [2024-11-04 14:46:40.630654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:10.779 [2024-11-04 14:46:40.630976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.037 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.037 "name": "raid_bdev1", 00:14:11.037 "uuid": "ac6189ca-8910-4297-b9b9-9bd7017d1942", 00:14:11.037 "strip_size_kb": 64, 00:14:11.037 "state": "online", 00:14:11.037 "raid_level": "raid0", 00:14:11.037 "superblock": true, 00:14:11.037 "num_base_bdevs": 2, 00:14:11.037 "num_base_bdevs_discovered": 2, 00:14:11.037 "num_base_bdevs_operational": 2, 00:14:11.037 "base_bdevs_list": [ 00:14:11.037 { 00:14:11.037 "name": "BaseBdev1", 00:14:11.037 "uuid": "a1538d9c-dc09-5832-b7a8-ba02c480a60b", 00:14:11.037 "is_configured": true, 00:14:11.037 "data_offset": 2048, 00:14:11.037 "data_size": 63488 00:14:11.037 }, 00:14:11.037 { 00:14:11.037 "name": "BaseBdev2", 00:14:11.037 "uuid": "a18ceaad-69c3-5623-a118-5998ff6d810e", 00:14:11.037 "is_configured": true, 00:14:11.037 "data_offset": 2048, 00:14:11.037 "data_size": 63488 00:14:11.037 } 00:14:11.037 ] 00:14:11.037 }' 00:14:11.037 14:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.037 14:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.295 14:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:11.295 14:46:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:11.553 [2024-11-04 14:46:41.280798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.487 14:46:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.487 "name": "raid_bdev1", 00:14:12.487 "uuid": "ac6189ca-8910-4297-b9b9-9bd7017d1942", 00:14:12.487 "strip_size_kb": 64, 00:14:12.487 "state": "online", 00:14:12.487 "raid_level": "raid0", 00:14:12.487 "superblock": true, 00:14:12.487 "num_base_bdevs": 2, 00:14:12.487 "num_base_bdevs_discovered": 2, 00:14:12.487 "num_base_bdevs_operational": 2, 00:14:12.487 "base_bdevs_list": [ 00:14:12.487 { 00:14:12.487 "name": "BaseBdev1", 00:14:12.487 "uuid": "a1538d9c-dc09-5832-b7a8-ba02c480a60b", 00:14:12.487 "is_configured": true, 00:14:12.487 "data_offset": 2048, 00:14:12.487 "data_size": 63488 00:14:12.487 }, 00:14:12.487 { 00:14:12.487 "name": "BaseBdev2", 00:14:12.487 "uuid": "a18ceaad-69c3-5623-a118-5998ff6d810e", 00:14:12.487 "is_configured": true, 00:14:12.487 "data_offset": 2048, 00:14:12.487 "data_size": 63488 00:14:12.487 } 00:14:12.487 ] 00:14:12.487 }' 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.487 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.054 [2024-11-04 14:46:42.707543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.054 [2024-11-04 14:46:42.707874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.054 { 00:14:13.054 "results": [ 00:14:13.054 { 00:14:13.054 "job": "raid_bdev1", 00:14:13.054 "core_mask": "0x1", 00:14:13.054 "workload": "randrw", 00:14:13.054 "percentage": 50, 00:14:13.054 "status": "finished", 00:14:13.054 "queue_depth": 1, 00:14:13.054 "io_size": 131072, 00:14:13.054 "runtime": 1.424285, 00:14:13.054 "iops": 9566.905499952607, 00:14:13.054 "mibps": 1195.8631874940759, 00:14:13.054 "io_failed": 1, 00:14:13.054 "io_timeout": 0, 00:14:13.054 "avg_latency_us": 146.80696638358341, 00:14:13.054 "min_latency_us": 39.56363636363636, 00:14:13.054 "max_latency_us": 1899.0545454545454 00:14:13.054 } 00:14:13.054 ], 00:14:13.054 "core_count": 1 00:14:13.054 } 00:14:13.054 [2024-11-04 14:46:42.711402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.054 [2024-11-04 14:46:42.711520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.054 [2024-11-04 14:46:42.711589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.054 [2024-11-04 14:46:42.711609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61598 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61598 ']' 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61598 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61598 00:14:13.054 killing process with pid 61598 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61598' 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61598 00:14:13.054 [2024-11-04 14:46:42.752687] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.054 14:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61598 00:14:13.054 [2024-11-04 14:46:42.890971] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RkAd6TYnBm 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:14:14.428 00:14:14.428 real 0m4.839s 00:14:14.428 user 0m5.971s 00:14:14.428 sys 0m0.662s 00:14:14.428 ************************************ 00:14:14.428 END TEST raid_write_error_test 00:14:14.428 ************************************ 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:14.428 14:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 14:46:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:14.428 14:46:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:14.428 14:46:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:14.428 14:46:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.428 14:46:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.428 ************************************ 00:14:14.428 START TEST raid_state_function_test 00:14:14.428 ************************************ 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:14.428 Process raid pid: 61741 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61741 
00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61741' 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61741 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61741 ']' 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.428 14:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.686 [2024-11-04 14:46:44.334938] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:14:14.686 [2024-11-04 14:46:44.335105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.686 [2024-11-04 14:46:44.520144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.945 [2024-11-04 14:46:44.680190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.203 [2024-11-04 14:46:44.929571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.203 [2024-11-04 14:46:44.929619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.770 [2024-11-04 14:46:45.381005] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.770 [2024-11-04 14:46:45.381094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.770 [2024-11-04 14:46:45.381113] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.770 [2024-11-04 14:46:45.381131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.770 14:46:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.770 "name": "Existed_Raid", 00:14:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.770 "strip_size_kb": 64, 00:14:15.770 "state": "configuring", 00:14:15.770 
"raid_level": "concat", 00:14:15.770 "superblock": false, 00:14:15.770 "num_base_bdevs": 2, 00:14:15.770 "num_base_bdevs_discovered": 0, 00:14:15.770 "num_base_bdevs_operational": 2, 00:14:15.770 "base_bdevs_list": [ 00:14:15.770 { 00:14:15.770 "name": "BaseBdev1", 00:14:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.770 "is_configured": false, 00:14:15.770 "data_offset": 0, 00:14:15.770 "data_size": 0 00:14:15.770 }, 00:14:15.770 { 00:14:15.770 "name": "BaseBdev2", 00:14:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.770 "is_configured": false, 00:14:15.770 "data_offset": 0, 00:14:15.770 "data_size": 0 00:14:15.770 } 00:14:15.770 ] 00:14:15.770 }' 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.770 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.028 [2024-11-04 14:46:45.901132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.028 [2024-11-04 14:46:45.901206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:16.028 [2024-11-04 14:46:45.909043] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.028 [2024-11-04 14:46:45.909360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.028 [2024-11-04 14:46:45.909499] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.028 [2024-11-04 14:46:45.909566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.028 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.287 [2024-11-04 14:46:45.961480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.287 BaseBdev1 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.287 [ 00:14:16.287 { 00:14:16.287 "name": "BaseBdev1", 00:14:16.287 "aliases": [ 00:14:16.287 "7e687b2b-d14f-4044-ae1f-e60864c6c700" 00:14:16.287 ], 00:14:16.287 "product_name": "Malloc disk", 00:14:16.287 "block_size": 512, 00:14:16.287 "num_blocks": 65536, 00:14:16.287 "uuid": "7e687b2b-d14f-4044-ae1f-e60864c6c700", 00:14:16.287 "assigned_rate_limits": { 00:14:16.287 "rw_ios_per_sec": 0, 00:14:16.287 "rw_mbytes_per_sec": 0, 00:14:16.287 "r_mbytes_per_sec": 0, 00:14:16.287 "w_mbytes_per_sec": 0 00:14:16.287 }, 00:14:16.287 "claimed": true, 00:14:16.287 "claim_type": "exclusive_write", 00:14:16.287 "zoned": false, 00:14:16.287 "supported_io_types": { 00:14:16.287 "read": true, 00:14:16.287 "write": true, 00:14:16.287 "unmap": true, 00:14:16.287 "flush": true, 00:14:16.287 "reset": true, 00:14:16.287 "nvme_admin": false, 00:14:16.287 "nvme_io": false, 00:14:16.287 "nvme_io_md": false, 00:14:16.287 "write_zeroes": true, 00:14:16.287 "zcopy": true, 00:14:16.287 "get_zone_info": false, 00:14:16.287 "zone_management": false, 00:14:16.287 "zone_append": false, 00:14:16.287 "compare": false, 00:14:16.287 "compare_and_write": false, 00:14:16.287 "abort": true, 00:14:16.287 "seek_hole": false, 00:14:16.287 "seek_data": false, 00:14:16.287 "copy": true, 00:14:16.287 "nvme_iov_md": 
false 00:14:16.287 }, 00:14:16.287 "memory_domains": [ 00:14:16.287 { 00:14:16.287 "dma_device_id": "system", 00:14:16.287 "dma_device_type": 1 00:14:16.287 }, 00:14:16.287 { 00:14:16.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.287 "dma_device_type": 2 00:14:16.287 } 00:14:16.287 ], 00:14:16.287 "driver_specific": {} 00:14:16.287 } 00:14:16.287 ] 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.287 
14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.287 14:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.287 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.287 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.287 "name": "Existed_Raid", 00:14:16.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.287 "strip_size_kb": 64, 00:14:16.287 "state": "configuring", 00:14:16.287 "raid_level": "concat", 00:14:16.287 "superblock": false, 00:14:16.287 "num_base_bdevs": 2, 00:14:16.287 "num_base_bdevs_discovered": 1, 00:14:16.287 "num_base_bdevs_operational": 2, 00:14:16.287 "base_bdevs_list": [ 00:14:16.287 { 00:14:16.287 "name": "BaseBdev1", 00:14:16.287 "uuid": "7e687b2b-d14f-4044-ae1f-e60864c6c700", 00:14:16.287 "is_configured": true, 00:14:16.287 "data_offset": 0, 00:14:16.287 "data_size": 65536 00:14:16.287 }, 00:14:16.287 { 00:14:16.287 "name": "BaseBdev2", 00:14:16.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.287 "is_configured": false, 00:14:16.287 "data_offset": 0, 00:14:16.287 "data_size": 0 00:14:16.287 } 00:14:16.287 ] 00:14:16.287 }' 00:14:16.287 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.287 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.854 [2024-11-04 14:46:46.557749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.854 [2024-11-04 14:46:46.557869] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.854 [2024-11-04 14:46:46.565772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.854 [2024-11-04 14:46:46.568638] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.854 [2024-11-04 14:46:46.568859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.854 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.854 "name": "Existed_Raid", 00:14:16.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.854 "strip_size_kb": 64, 00:14:16.854 "state": "configuring", 00:14:16.854 "raid_level": "concat", 00:14:16.854 "superblock": false, 00:14:16.854 "num_base_bdevs": 2, 00:14:16.854 "num_base_bdevs_discovered": 1, 00:14:16.854 "num_base_bdevs_operational": 2, 00:14:16.854 "base_bdevs_list": [ 00:14:16.854 { 00:14:16.854 "name": "BaseBdev1", 00:14:16.854 "uuid": "7e687b2b-d14f-4044-ae1f-e60864c6c700", 00:14:16.854 "is_configured": true, 00:14:16.854 "data_offset": 0, 00:14:16.854 "data_size": 65536 00:14:16.855 }, 00:14:16.855 { 00:14:16.855 "name": "BaseBdev2", 00:14:16.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.855 "is_configured": false, 00:14:16.855 "data_offset": 0, 00:14:16.855 "data_size": 0 00:14:16.855 } 
00:14:16.855 ] 00:14:16.855 }' 00:14:16.855 14:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.855 14:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.421 [2024-11-04 14:46:47.156790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.421 BaseBdev2 00:14:17.421 [2024-11-04 14:46:47.157138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:17.421 [2024-11-04 14:46:47.157171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:17.421 [2024-11-04 14:46:47.157581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:17.421 [2024-11-04 14:46:47.157845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:17.421 [2024-11-04 14:46:47.157868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:17.421 [2024-11-04 14:46:47.158246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:17.421 14:46:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.421 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.421 [ 00:14:17.421 { 00:14:17.421 "name": "BaseBdev2", 00:14:17.421 "aliases": [ 00:14:17.421 "d50485a7-9e2c-4e25-bdfd-e77bda6309c5" 00:14:17.421 ], 00:14:17.421 "product_name": "Malloc disk", 00:14:17.421 "block_size": 512, 00:14:17.421 "num_blocks": 65536, 00:14:17.421 "uuid": "d50485a7-9e2c-4e25-bdfd-e77bda6309c5", 00:14:17.421 "assigned_rate_limits": { 00:14:17.421 "rw_ios_per_sec": 0, 00:14:17.421 "rw_mbytes_per_sec": 0, 00:14:17.421 "r_mbytes_per_sec": 0, 00:14:17.421 "w_mbytes_per_sec": 0 00:14:17.421 }, 00:14:17.421 "claimed": true, 00:14:17.421 "claim_type": "exclusive_write", 00:14:17.421 "zoned": false, 00:14:17.421 "supported_io_types": { 00:14:17.421 "read": true, 00:14:17.421 "write": true, 00:14:17.421 "unmap": true, 00:14:17.421 "flush": true, 00:14:17.421 "reset": true, 00:14:17.421 "nvme_admin": false, 00:14:17.421 "nvme_io": false, 00:14:17.421 "nvme_io_md": 
false, 00:14:17.421 "write_zeroes": true, 00:14:17.421 "zcopy": true, 00:14:17.421 "get_zone_info": false, 00:14:17.421 "zone_management": false, 00:14:17.421 "zone_append": false, 00:14:17.421 "compare": false, 00:14:17.421 "compare_and_write": false, 00:14:17.421 "abort": true, 00:14:17.421 "seek_hole": false, 00:14:17.421 "seek_data": false, 00:14:17.421 "copy": true, 00:14:17.421 "nvme_iov_md": false 00:14:17.421 }, 00:14:17.421 "memory_domains": [ 00:14:17.421 { 00:14:17.421 "dma_device_id": "system", 00:14:17.421 "dma_device_type": 1 00:14:17.421 }, 00:14:17.421 { 00:14:17.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.421 "dma_device_type": 2 00:14:17.421 } 00:14:17.421 ], 00:14:17.421 "driver_specific": {} 00:14:17.421 } 00:14:17.421 ] 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.422 "name": "Existed_Raid", 00:14:17.422 "uuid": "7eb38c92-4f86-461c-8797-131d6a435a73", 00:14:17.422 "strip_size_kb": 64, 00:14:17.422 "state": "online", 00:14:17.422 "raid_level": "concat", 00:14:17.422 "superblock": false, 00:14:17.422 "num_base_bdevs": 2, 00:14:17.422 "num_base_bdevs_discovered": 2, 00:14:17.422 "num_base_bdevs_operational": 2, 00:14:17.422 "base_bdevs_list": [ 00:14:17.422 { 00:14:17.422 "name": "BaseBdev1", 00:14:17.422 "uuid": "7e687b2b-d14f-4044-ae1f-e60864c6c700", 00:14:17.422 "is_configured": true, 00:14:17.422 "data_offset": 0, 00:14:17.422 "data_size": 65536 00:14:17.422 }, 00:14:17.422 { 00:14:17.422 "name": "BaseBdev2", 00:14:17.422 "uuid": "d50485a7-9e2c-4e25-bdfd-e77bda6309c5", 00:14:17.422 "is_configured": true, 00:14:17.422 "data_offset": 0, 00:14:17.422 "data_size": 65536 00:14:17.422 } 00:14:17.422 ] 00:14:17.422 }' 00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:17.422 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.988 [2024-11-04 14:46:47.701500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.988 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:17.988 "name": "Existed_Raid", 00:14:17.988 "aliases": [ 00:14:17.988 "7eb38c92-4f86-461c-8797-131d6a435a73" 00:14:17.988 ], 00:14:17.988 "product_name": "Raid Volume", 00:14:17.988 "block_size": 512, 00:14:17.988 "num_blocks": 131072, 00:14:17.988 "uuid": "7eb38c92-4f86-461c-8797-131d6a435a73", 00:14:17.988 "assigned_rate_limits": { 00:14:17.988 "rw_ios_per_sec": 0, 00:14:17.988 "rw_mbytes_per_sec": 0, 00:14:17.988 "r_mbytes_per_sec": 
0, 00:14:17.988 "w_mbytes_per_sec": 0 00:14:17.988 }, 00:14:17.988 "claimed": false, 00:14:17.988 "zoned": false, 00:14:17.988 "supported_io_types": { 00:14:17.988 "read": true, 00:14:17.988 "write": true, 00:14:17.988 "unmap": true, 00:14:17.988 "flush": true, 00:14:17.988 "reset": true, 00:14:17.988 "nvme_admin": false, 00:14:17.988 "nvme_io": false, 00:14:17.988 "nvme_io_md": false, 00:14:17.988 "write_zeroes": true, 00:14:17.988 "zcopy": false, 00:14:17.988 "get_zone_info": false, 00:14:17.988 "zone_management": false, 00:14:17.988 "zone_append": false, 00:14:17.988 "compare": false, 00:14:17.988 "compare_and_write": false, 00:14:17.988 "abort": false, 00:14:17.988 "seek_hole": false, 00:14:17.988 "seek_data": false, 00:14:17.988 "copy": false, 00:14:17.988 "nvme_iov_md": false 00:14:17.988 }, 00:14:17.988 "memory_domains": [ 00:14:17.988 { 00:14:17.988 "dma_device_id": "system", 00:14:17.988 "dma_device_type": 1 00:14:17.988 }, 00:14:17.988 { 00:14:17.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.988 "dma_device_type": 2 00:14:17.988 }, 00:14:17.989 { 00:14:17.989 "dma_device_id": "system", 00:14:17.989 "dma_device_type": 1 00:14:17.989 }, 00:14:17.989 { 00:14:17.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.989 "dma_device_type": 2 00:14:17.989 } 00:14:17.989 ], 00:14:17.989 "driver_specific": { 00:14:17.989 "raid": { 00:14:17.989 "uuid": "7eb38c92-4f86-461c-8797-131d6a435a73", 00:14:17.989 "strip_size_kb": 64, 00:14:17.989 "state": "online", 00:14:17.989 "raid_level": "concat", 00:14:17.989 "superblock": false, 00:14:17.989 "num_base_bdevs": 2, 00:14:17.989 "num_base_bdevs_discovered": 2, 00:14:17.989 "num_base_bdevs_operational": 2, 00:14:17.989 "base_bdevs_list": [ 00:14:17.989 { 00:14:17.989 "name": "BaseBdev1", 00:14:17.989 "uuid": "7e687b2b-d14f-4044-ae1f-e60864c6c700", 00:14:17.989 "is_configured": true, 00:14:17.989 "data_offset": 0, 00:14:17.989 "data_size": 65536 00:14:17.989 }, 00:14:17.989 { 00:14:17.989 "name": "BaseBdev2", 
00:14:17.989 "uuid": "d50485a7-9e2c-4e25-bdfd-e77bda6309c5", 00:14:17.989 "is_configured": true, 00:14:17.989 "data_offset": 0, 00:14:17.989 "data_size": 65536 00:14:17.989 } 00:14:17.989 ] 00:14:17.989 } 00:14:17.989 } 00:14:17.989 }' 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:17.989 BaseBdev2' 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.989 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.247 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.247 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.247 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.247 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.248 14:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.248 [2024-11-04 14:46:47.961126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.248 [2024-11-04 14:46:47.961434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.248 [2024-11-04 14:46:47.961607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.248 "name": "Existed_Raid", 00:14:18.248 "uuid": "7eb38c92-4f86-461c-8797-131d6a435a73", 00:14:18.248 "strip_size_kb": 64, 00:14:18.248 
"state": "offline", 00:14:18.248 "raid_level": "concat", 00:14:18.248 "superblock": false, 00:14:18.248 "num_base_bdevs": 2, 00:14:18.248 "num_base_bdevs_discovered": 1, 00:14:18.248 "num_base_bdevs_operational": 1, 00:14:18.248 "base_bdevs_list": [ 00:14:18.248 { 00:14:18.248 "name": null, 00:14:18.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.248 "is_configured": false, 00:14:18.248 "data_offset": 0, 00:14:18.248 "data_size": 65536 00:14:18.248 }, 00:14:18.248 { 00:14:18.248 "name": "BaseBdev2", 00:14:18.248 "uuid": "d50485a7-9e2c-4e25-bdfd-e77bda6309c5", 00:14:18.248 "is_configured": true, 00:14:18.248 "data_offset": 0, 00:14:18.248 "data_size": 65536 00:14:18.248 } 00:14:18.248 ] 00:14:18.248 }' 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.248 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.815 [2024-11-04 14:46:48.602773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.815 [2024-11-04 14:46:48.603084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.815 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61741 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61741 ']' 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61741 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61741 00:14:19.073 killing process with pid 61741 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61741' 00:14:19.073 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61741 00:14:19.073 [2024-11-04 14:46:48.805418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.074 14:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61741 00:14:19.074 [2024-11-04 14:46:48.821436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.030 14:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:20.030 00:14:20.030 real 0m5.688s 00:14:20.030 user 0m8.568s 00:14:20.030 sys 0m0.831s 00:14:20.030 14:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:20.030 14:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.030 ************************************ 00:14:20.030 END TEST raid_state_function_test 00:14:20.030 ************************************ 00:14:20.289 14:46:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:20.289 14:46:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:14:20.289 14:46:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:20.289 14:46:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.289 ************************************ 00:14:20.289 START TEST raid_state_function_test_sb 00:14:20.289 ************************************ 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.289 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62000 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:20.290 Process raid pid: 62000 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62000' 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62000 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62000 ']' 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:20.290 14:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.290 [2024-11-04 14:46:50.066296] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:14:20.290 [2024-11-04 14:46:50.066480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.549 [2024-11-04 14:46:50.241339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.549 [2024-11-04 14:46:50.385707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.808 [2024-11-04 14:46:50.627535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.808 [2024-11-04 14:46:50.627608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.399 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:21.399 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 [2024-11-04 14:46:51.020513] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:14:21.400 [2024-11-04 14:46:51.020876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.400 [2024-11-04 14:46:51.020906] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.400 [2024-11-04 14:46:51.020925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.400 "name": "Existed_Raid", 00:14:21.400 "uuid": "999022ba-2766-4c7e-b79b-8ffae053dd0a", 00:14:21.400 "strip_size_kb": 64, 00:14:21.400 "state": "configuring", 00:14:21.400 "raid_level": "concat", 00:14:21.400 "superblock": true, 00:14:21.400 "num_base_bdevs": 2, 00:14:21.400 "num_base_bdevs_discovered": 0, 00:14:21.400 "num_base_bdevs_operational": 2, 00:14:21.400 "base_bdevs_list": [ 00:14:21.400 { 00:14:21.400 "name": "BaseBdev1", 00:14:21.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.400 "is_configured": false, 00:14:21.400 "data_offset": 0, 00:14:21.400 "data_size": 0 00:14:21.400 }, 00:14:21.400 { 00:14:21.400 "name": "BaseBdev2", 00:14:21.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.400 "is_configured": false, 00:14:21.400 "data_offset": 0, 00:14:21.400 "data_size": 0 00:14:21.400 } 00:14:21.400 ] 00:14:21.400 }' 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.400 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.659 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:21.659 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.659 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.659 [2024-11-04 14:46:51.536615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:14:21.659 [2024-11-04 14:46:51.536881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:21.659 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.659 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:21.659 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.659 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.660 [2024-11-04 14:46:51.544576] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.660 [2024-11-04 14:46:51.544797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.660 [2024-11-04 14:46:51.544926] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.660 [2024-11-04 14:46:51.544989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.660 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.660 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.660 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.660 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.919 [2024-11-04 14:46:51.595057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.919 BaseBdev1 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.919 [ 00:14:21.919 { 00:14:21.919 "name": "BaseBdev1", 00:14:21.919 "aliases": [ 00:14:21.919 "49200f5a-2da7-4301-9a7b-9b891d506e28" 00:14:21.919 ], 00:14:21.919 "product_name": "Malloc disk", 00:14:21.919 "block_size": 512, 00:14:21.919 "num_blocks": 65536, 00:14:21.919 "uuid": "49200f5a-2da7-4301-9a7b-9b891d506e28", 00:14:21.919 "assigned_rate_limits": { 00:14:21.919 "rw_ios_per_sec": 0, 00:14:21.919 "rw_mbytes_per_sec": 0, 00:14:21.919 "r_mbytes_per_sec": 0, 00:14:21.919 "w_mbytes_per_sec": 0 00:14:21.919 }, 00:14:21.919 "claimed": true, 
00:14:21.919 "claim_type": "exclusive_write", 00:14:21.919 "zoned": false, 00:14:21.919 "supported_io_types": { 00:14:21.919 "read": true, 00:14:21.919 "write": true, 00:14:21.919 "unmap": true, 00:14:21.919 "flush": true, 00:14:21.919 "reset": true, 00:14:21.919 "nvme_admin": false, 00:14:21.919 "nvme_io": false, 00:14:21.919 "nvme_io_md": false, 00:14:21.919 "write_zeroes": true, 00:14:21.919 "zcopy": true, 00:14:21.919 "get_zone_info": false, 00:14:21.919 "zone_management": false, 00:14:21.919 "zone_append": false, 00:14:21.919 "compare": false, 00:14:21.919 "compare_and_write": false, 00:14:21.919 "abort": true, 00:14:21.919 "seek_hole": false, 00:14:21.919 "seek_data": false, 00:14:21.919 "copy": true, 00:14:21.919 "nvme_iov_md": false 00:14:21.919 }, 00:14:21.919 "memory_domains": [ 00:14:21.919 { 00:14:21.919 "dma_device_id": "system", 00:14:21.919 "dma_device_type": 1 00:14:21.919 }, 00:14:21.919 { 00:14:21.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.919 "dma_device_type": 2 00:14:21.919 } 00:14:21.919 ], 00:14:21.919 "driver_specific": {} 00:14:21.919 } 00:14:21.919 ] 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.919 14:46:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.919 "name": "Existed_Raid", 00:14:21.919 "uuid": "f56db98d-2f88-49d5-beb5-15f8585aa32f", 00:14:21.919 "strip_size_kb": 64, 00:14:21.919 "state": "configuring", 00:14:21.919 "raid_level": "concat", 00:14:21.919 "superblock": true, 00:14:21.919 "num_base_bdevs": 2, 00:14:21.919 "num_base_bdevs_discovered": 1, 00:14:21.919 "num_base_bdevs_operational": 2, 00:14:21.919 "base_bdevs_list": [ 00:14:21.919 { 00:14:21.919 "name": "BaseBdev1", 00:14:21.919 "uuid": "49200f5a-2da7-4301-9a7b-9b891d506e28", 00:14:21.919 "is_configured": true, 00:14:21.919 "data_offset": 2048, 00:14:21.919 "data_size": 63488 00:14:21.919 }, 00:14:21.919 { 00:14:21.919 "name": "BaseBdev2", 00:14:21.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.919 
"is_configured": false, 00:14:21.919 "data_offset": 0, 00:14:21.919 "data_size": 0 00:14:21.919 } 00:14:21.919 ] 00:14:21.919 }' 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.919 14:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.486 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.486 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.486 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.486 [2024-11-04 14:46:52.155363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.486 [2024-11-04 14:46:52.155471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:22.486 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.486 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.487 [2024-11-04 14:46:52.163404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.487 [2024-11-04 14:46:52.166379] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.487 [2024-11-04 14:46:52.166566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.487 14:46:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.487 14:46:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.487 "name": "Existed_Raid", 00:14:22.487 "uuid": "66e4599f-42d7-4284-b7ee-7d2d38f2d651", 00:14:22.487 "strip_size_kb": 64, 00:14:22.487 "state": "configuring", 00:14:22.487 "raid_level": "concat", 00:14:22.487 "superblock": true, 00:14:22.487 "num_base_bdevs": 2, 00:14:22.487 "num_base_bdevs_discovered": 1, 00:14:22.487 "num_base_bdevs_operational": 2, 00:14:22.487 "base_bdevs_list": [ 00:14:22.487 { 00:14:22.487 "name": "BaseBdev1", 00:14:22.487 "uuid": "49200f5a-2da7-4301-9a7b-9b891d506e28", 00:14:22.487 "is_configured": true, 00:14:22.487 "data_offset": 2048, 00:14:22.487 "data_size": 63488 00:14:22.487 }, 00:14:22.487 { 00:14:22.487 "name": "BaseBdev2", 00:14:22.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.487 "is_configured": false, 00:14:22.487 "data_offset": 0, 00:14:22.487 "data_size": 0 00:14:22.487 } 00:14:22.487 ] 00:14:22.487 }' 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.487 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.053 [2024-11-04 14:46:52.742011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.053 [2024-11-04 14:46:52.742424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:23.053 [2024-11-04 14:46:52.742446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:23.053 BaseBdev2 00:14:23.053 [2024-11-04 14:46:52.742799] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:23.053 [2024-11-04 14:46:52.743007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:23.053 [2024-11-04 14:46:52.743029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:23.053 [2024-11-04 14:46:52.743258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.053 
14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.053 [ 00:14:23.053 { 00:14:23.053 "name": "BaseBdev2", 00:14:23.053 "aliases": [ 00:14:23.053 "0e0cf1fa-33c2-444f-873f-f91a12edb02e" 00:14:23.053 ], 00:14:23.053 "product_name": "Malloc disk", 00:14:23.053 "block_size": 512, 00:14:23.053 "num_blocks": 65536, 00:14:23.053 "uuid": "0e0cf1fa-33c2-444f-873f-f91a12edb02e", 00:14:23.053 "assigned_rate_limits": { 00:14:23.053 "rw_ios_per_sec": 0, 00:14:23.053 "rw_mbytes_per_sec": 0, 00:14:23.053 "r_mbytes_per_sec": 0, 00:14:23.053 "w_mbytes_per_sec": 0 00:14:23.053 }, 00:14:23.053 "claimed": true, 00:14:23.053 "claim_type": "exclusive_write", 00:14:23.053 "zoned": false, 00:14:23.053 "supported_io_types": { 00:14:23.053 "read": true, 00:14:23.053 "write": true, 00:14:23.053 "unmap": true, 00:14:23.053 "flush": true, 00:14:23.053 "reset": true, 00:14:23.053 "nvme_admin": false, 00:14:23.053 "nvme_io": false, 00:14:23.053 "nvme_io_md": false, 00:14:23.053 "write_zeroes": true, 00:14:23.053 "zcopy": true, 00:14:23.053 "get_zone_info": false, 00:14:23.053 "zone_management": false, 00:14:23.053 "zone_append": false, 00:14:23.053 "compare": false, 00:14:23.053 "compare_and_write": false, 00:14:23.053 "abort": true, 00:14:23.053 "seek_hole": false, 00:14:23.053 "seek_data": false, 00:14:23.053 "copy": true, 00:14:23.053 "nvme_iov_md": false 00:14:23.053 }, 00:14:23.053 "memory_domains": [ 00:14:23.053 { 00:14:23.053 "dma_device_id": "system", 00:14:23.053 "dma_device_type": 1 00:14:23.053 }, 00:14:23.053 { 00:14:23.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.053 "dma_device_type": 2 00:14:23.053 } 00:14:23.053 ], 00:14:23.053 "driver_specific": {} 00:14:23.053 } 00:14:23.053 ] 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:23.053 14:46:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.053 14:46:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.053 "name": "Existed_Raid", 00:14:23.053 "uuid": "66e4599f-42d7-4284-b7ee-7d2d38f2d651", 00:14:23.053 "strip_size_kb": 64, 00:14:23.053 "state": "online", 00:14:23.053 "raid_level": "concat", 00:14:23.053 "superblock": true, 00:14:23.053 "num_base_bdevs": 2, 00:14:23.053 "num_base_bdevs_discovered": 2, 00:14:23.053 "num_base_bdevs_operational": 2, 00:14:23.053 "base_bdevs_list": [ 00:14:23.053 { 00:14:23.053 "name": "BaseBdev1", 00:14:23.053 "uuid": "49200f5a-2da7-4301-9a7b-9b891d506e28", 00:14:23.053 "is_configured": true, 00:14:23.053 "data_offset": 2048, 00:14:23.053 "data_size": 63488 00:14:23.053 }, 00:14:23.053 { 00:14:23.053 "name": "BaseBdev2", 00:14:23.053 "uuid": "0e0cf1fa-33c2-444f-873f-f91a12edb02e", 00:14:23.053 "is_configured": true, 00:14:23.053 "data_offset": 2048, 00:14:23.053 "data_size": 63488 00:14:23.053 } 00:14:23.053 ] 00:14:23.053 }' 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.053 14:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:23.619 [2024-11-04 14:46:53.298638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.619 "name": "Existed_Raid", 00:14:23.619 "aliases": [ 00:14:23.619 "66e4599f-42d7-4284-b7ee-7d2d38f2d651" 00:14:23.619 ], 00:14:23.619 "product_name": "Raid Volume", 00:14:23.619 "block_size": 512, 00:14:23.619 "num_blocks": 126976, 00:14:23.619 "uuid": "66e4599f-42d7-4284-b7ee-7d2d38f2d651", 00:14:23.619 "assigned_rate_limits": { 00:14:23.619 "rw_ios_per_sec": 0, 00:14:23.619 "rw_mbytes_per_sec": 0, 00:14:23.619 "r_mbytes_per_sec": 0, 00:14:23.619 "w_mbytes_per_sec": 0 00:14:23.619 }, 00:14:23.619 "claimed": false, 00:14:23.619 "zoned": false, 00:14:23.619 "supported_io_types": { 00:14:23.619 "read": true, 00:14:23.619 "write": true, 00:14:23.619 "unmap": true, 00:14:23.619 "flush": true, 00:14:23.619 "reset": true, 00:14:23.619 "nvme_admin": false, 00:14:23.619 "nvme_io": false, 00:14:23.619 "nvme_io_md": false, 00:14:23.619 "write_zeroes": true, 00:14:23.619 "zcopy": false, 00:14:23.619 "get_zone_info": false, 00:14:23.619 "zone_management": false, 00:14:23.619 "zone_append": false, 00:14:23.619 "compare": false, 00:14:23.619 "compare_and_write": false, 00:14:23.619 "abort": false, 00:14:23.619 "seek_hole": false, 00:14:23.619 "seek_data": false, 00:14:23.619 "copy": false, 00:14:23.619 "nvme_iov_md": false 00:14:23.619 }, 00:14:23.619 "memory_domains": [ 00:14:23.619 { 00:14:23.619 
"dma_device_id": "system", 00:14:23.619 "dma_device_type": 1 00:14:23.619 }, 00:14:23.619 { 00:14:23.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.619 "dma_device_type": 2 00:14:23.619 }, 00:14:23.619 { 00:14:23.619 "dma_device_id": "system", 00:14:23.619 "dma_device_type": 1 00:14:23.619 }, 00:14:23.619 { 00:14:23.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.619 "dma_device_type": 2 00:14:23.619 } 00:14:23.619 ], 00:14:23.619 "driver_specific": { 00:14:23.619 "raid": { 00:14:23.619 "uuid": "66e4599f-42d7-4284-b7ee-7d2d38f2d651", 00:14:23.619 "strip_size_kb": 64, 00:14:23.619 "state": "online", 00:14:23.619 "raid_level": "concat", 00:14:23.619 "superblock": true, 00:14:23.619 "num_base_bdevs": 2, 00:14:23.619 "num_base_bdevs_discovered": 2, 00:14:23.619 "num_base_bdevs_operational": 2, 00:14:23.619 "base_bdevs_list": [ 00:14:23.619 { 00:14:23.619 "name": "BaseBdev1", 00:14:23.619 "uuid": "49200f5a-2da7-4301-9a7b-9b891d506e28", 00:14:23.619 "is_configured": true, 00:14:23.619 "data_offset": 2048, 00:14:23.619 "data_size": 63488 00:14:23.619 }, 00:14:23.619 { 00:14:23.619 "name": "BaseBdev2", 00:14:23.619 "uuid": "0e0cf1fa-33c2-444f-873f-f91a12edb02e", 00:14:23.619 "is_configured": true, 00:14:23.619 "data_offset": 2048, 00:14:23.619 "data_size": 63488 00:14:23.619 } 00:14:23.619 ] 00:14:23.619 } 00:14:23.619 } 00:14:23.619 }' 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:23.619 BaseBdev2' 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.619 14:46:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.619 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.878 [2024-11-04 14:46:53.582497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.878 [2024-11-04 14:46:53.582816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.878 [2024-11-04 14:46:53.582917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.878 "name": "Existed_Raid", 00:14:23.878 "uuid": "66e4599f-42d7-4284-b7ee-7d2d38f2d651", 00:14:23.878 "strip_size_kb": 64, 00:14:23.878 "state": "offline", 00:14:23.878 "raid_level": "concat", 00:14:23.878 "superblock": true, 00:14:23.878 "num_base_bdevs": 2, 00:14:23.878 "num_base_bdevs_discovered": 1, 00:14:23.878 "num_base_bdevs_operational": 1, 00:14:23.878 "base_bdevs_list": [ 00:14:23.878 { 00:14:23.878 "name": null, 00:14:23.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.878 "is_configured": false, 00:14:23.878 "data_offset": 0, 00:14:23.878 "data_size": 63488 00:14:23.878 }, 00:14:23.878 { 00:14:23.878 "name": "BaseBdev2", 00:14:23.878 "uuid": "0e0cf1fa-33c2-444f-873f-f91a12edb02e", 00:14:23.878 "is_configured": true, 00:14:23.878 "data_offset": 2048, 00:14:23.878 "data_size": 63488 00:14:23.878 } 00:14:23.878 ] 
00:14:23.878 }' 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.878 14:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.445 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.445 [2024-11-04 14:46:54.271536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:24.445 [2024-11-04 14:46:54.271867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.703 14:46:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62000 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62000 ']' 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62000 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62000 00:14:24.703 killing process with pid 62000 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62000' 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62000 00:14:24.703 14:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62000 00:14:24.703 [2024-11-04 14:46:54.503475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.704 [2024-11-04 14:46:54.519042] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.114 14:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:26.114 00:14:26.114 real 0m5.731s 00:14:26.114 user 0m8.448s 00:14:26.114 sys 0m0.886s 00:14:26.114 14:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:26.114 ************************************ 00:14:26.114 END TEST raid_state_function_test_sb 00:14:26.114 ************************************ 00:14:26.114 14:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.114 14:46:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:26.114 14:46:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:26.114 14:46:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:26.114 14:46:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.114 ************************************ 00:14:26.114 START TEST raid_superblock_test 00:14:26.114 ************************************ 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62263 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62263 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62263 ']' 00:14:26.114 
14:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:26.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:26.114 14:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.114 [2024-11-04 14:46:55.859149] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:14:26.114 [2024-11-04 14:46:55.859399] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62263 ] 00:14:26.372 [2024-11-04 14:46:56.036768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.372 [2024-11-04 14:46:56.187664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.630 [2024-11-04 14:46:56.424110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.630 [2024-11-04 14:46:56.424180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.195 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.195 malloc1 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.196 [2024-11-04 14:46:56.969435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.196 [2024-11-04 14:46:56.969763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.196 [2024-11-04 14:46:56.969844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:27.196 [2024-11-04 14:46:56.969981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:14:27.196 [2024-11-04 14:46:56.973097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.196 [2024-11-04 14:46:56.973278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.196 pt1 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.196 14:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.196 malloc2 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.196 [2024-11-04 14:46:57.032086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.196 [2024-11-04 14:46:57.032434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.196 [2024-11-04 14:46:57.032513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:27.196 [2024-11-04 14:46:57.032713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.196 [2024-11-04 14:46:57.035899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.196 [2024-11-04 14:46:57.035943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.196 pt2 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.196 [2024-11-04 14:46:57.040220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.196 [2024-11-04 14:46:57.043009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:27.196 [2024-11-04 14:46:57.043249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:27.196 [2024-11-04 14:46:57.043283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:27.196 [2024-11-04 14:46:57.043590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:27.196 [2024-11-04 14:46:57.043808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:27.196 [2024-11-04 14:46:57.043832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:27.196 [2024-11-04 14:46:57.044082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.196 14:46:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.196 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.455 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.455 "name": "raid_bdev1", 00:14:27.455 "uuid": "7b82f7d4-9090-464b-ac18-51de5dfd31d8", 00:14:27.455 "strip_size_kb": 64, 00:14:27.455 "state": "online", 00:14:27.455 "raid_level": "concat", 00:14:27.455 "superblock": true, 00:14:27.455 "num_base_bdevs": 2, 00:14:27.455 "num_base_bdevs_discovered": 2, 00:14:27.455 "num_base_bdevs_operational": 2, 00:14:27.455 "base_bdevs_list": [ 00:14:27.455 { 00:14:27.455 "name": "pt1", 00:14:27.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.455 "is_configured": true, 00:14:27.455 "data_offset": 2048, 00:14:27.455 "data_size": 63488 00:14:27.455 }, 00:14:27.455 { 00:14:27.455 "name": "pt2", 00:14:27.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.455 "is_configured": true, 00:14:27.455 "data_offset": 2048, 00:14:27.455 "data_size": 63488 00:14:27.455 } 00:14:27.455 ] 00:14:27.455 }' 00:14:27.455 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.455 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.714 
14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.714 [2024-11-04 14:46:57.564776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.714 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.972 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.972 "name": "raid_bdev1", 00:14:27.972 "aliases": [ 00:14:27.972 "7b82f7d4-9090-464b-ac18-51de5dfd31d8" 00:14:27.972 ], 00:14:27.972 "product_name": "Raid Volume", 00:14:27.972 "block_size": 512, 00:14:27.972 "num_blocks": 126976, 00:14:27.972 "uuid": "7b82f7d4-9090-464b-ac18-51de5dfd31d8", 00:14:27.972 "assigned_rate_limits": { 00:14:27.972 "rw_ios_per_sec": 0, 00:14:27.972 "rw_mbytes_per_sec": 0, 00:14:27.972 "r_mbytes_per_sec": 0, 00:14:27.972 "w_mbytes_per_sec": 0 00:14:27.972 }, 00:14:27.972 "claimed": false, 00:14:27.972 "zoned": false, 00:14:27.972 "supported_io_types": { 00:14:27.972 "read": true, 00:14:27.972 "write": true, 00:14:27.972 "unmap": true, 00:14:27.972 "flush": true, 00:14:27.972 "reset": true, 00:14:27.972 "nvme_admin": false, 00:14:27.972 "nvme_io": false, 00:14:27.972 "nvme_io_md": false, 00:14:27.972 "write_zeroes": true, 00:14:27.972 "zcopy": false, 00:14:27.972 "get_zone_info": false, 00:14:27.972 "zone_management": false, 00:14:27.972 "zone_append": false, 00:14:27.972 "compare": false, 00:14:27.972 "compare_and_write": false, 00:14:27.972 "abort": false, 00:14:27.972 "seek_hole": false, 00:14:27.972 
"seek_data": false, 00:14:27.972 "copy": false, 00:14:27.972 "nvme_iov_md": false 00:14:27.972 }, 00:14:27.972 "memory_domains": [ 00:14:27.973 { 00:14:27.973 "dma_device_id": "system", 00:14:27.973 "dma_device_type": 1 00:14:27.973 }, 00:14:27.973 { 00:14:27.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.973 "dma_device_type": 2 00:14:27.973 }, 00:14:27.973 { 00:14:27.973 "dma_device_id": "system", 00:14:27.973 "dma_device_type": 1 00:14:27.973 }, 00:14:27.973 { 00:14:27.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.973 "dma_device_type": 2 00:14:27.973 } 00:14:27.973 ], 00:14:27.973 "driver_specific": { 00:14:27.973 "raid": { 00:14:27.973 "uuid": "7b82f7d4-9090-464b-ac18-51de5dfd31d8", 00:14:27.973 "strip_size_kb": 64, 00:14:27.973 "state": "online", 00:14:27.973 "raid_level": "concat", 00:14:27.973 "superblock": true, 00:14:27.973 "num_base_bdevs": 2, 00:14:27.973 "num_base_bdevs_discovered": 2, 00:14:27.973 "num_base_bdevs_operational": 2, 00:14:27.973 "base_bdevs_list": [ 00:14:27.973 { 00:14:27.973 "name": "pt1", 00:14:27.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.973 "is_configured": true, 00:14:27.973 "data_offset": 2048, 00:14:27.973 "data_size": 63488 00:14:27.973 }, 00:14:27.973 { 00:14:27.973 "name": "pt2", 00:14:27.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.973 "is_configured": true, 00:14:27.973 "data_offset": 2048, 00:14:27.973 "data_size": 63488 00:14:27.973 } 00:14:27.973 ] 00:14:27.973 } 00:14:27.973 } 00:14:27.973 }' 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:27.973 pt2' 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.973 14:46:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.973 [2024-11-04 14:46:57.816831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.973 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7b82f7d4-9090-464b-ac18-51de5dfd31d8 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7b82f7d4-9090-464b-ac18-51de5dfd31d8 ']' 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.231 [2024-11-04 14:46:57.892441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.231 [2024-11-04 14:46:57.892700] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.231 [2024-11-04 14:46:57.892937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.231 [2024-11-04 14:46:57.893019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.231 [2024-11-04 14:46:57.893041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.231 14:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.231 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:28.231 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.232 [2024-11-04 14:46:58.032625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:28.232 [2024-11-04 14:46:58.035587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:28.232 [2024-11-04 14:46:58.035885] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:28.232 [2024-11-04 14:46:58.035976] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:28.232 [2024-11-04 14:46:58.036004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.232 [2024-11-04 14:46:58.036021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:28.232 request: 00:14:28.232 { 00:14:28.232 "name": "raid_bdev1", 00:14:28.232 "raid_level": "concat", 00:14:28.232 "base_bdevs": [ 00:14:28.232 "malloc1", 00:14:28.232 "malloc2" 00:14:28.232 ], 00:14:28.232 "strip_size_kb": 64, 00:14:28.232 "superblock": false, 00:14:28.232 "method": "bdev_raid_create", 00:14:28.232 "req_id": 1 00:14:28.232 } 00:14:28.232 Got JSON-RPC error response 00:14:28.232 response: 00:14:28.232 { 00:14:28.232 "code": -17, 00:14:28.232 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:28.232 } 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.232 [2024-11-04 14:46:58.100599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:28.232 [2024-11-04 14:46:58.100829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.232 [2024-11-04 14:46:58.101023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:28.232 [2024-11-04 14:46:58.101153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.232 [2024-11-04 14:46:58.104394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.232 pt1 00:14:28.232 [2024-11-04 14:46:58.104548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:28.232 [2024-11-04 14:46:58.104662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:28.232 [2024-11-04 14:46:58.104749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.232 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.490 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.490 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.490 "name": "raid_bdev1", 00:14:28.490 "uuid": "7b82f7d4-9090-464b-ac18-51de5dfd31d8", 00:14:28.490 "strip_size_kb": 64, 00:14:28.490 "state": "configuring", 00:14:28.490 "raid_level": "concat", 00:14:28.490 "superblock": true, 00:14:28.490 "num_base_bdevs": 2, 00:14:28.490 "num_base_bdevs_discovered": 1, 00:14:28.490 "num_base_bdevs_operational": 2, 00:14:28.490 "base_bdevs_list": [ 00:14:28.490 { 00:14:28.490 
"name": "pt1", 00:14:28.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.490 "is_configured": true, 00:14:28.490 "data_offset": 2048, 00:14:28.490 "data_size": 63488 00:14:28.490 }, 00:14:28.490 { 00:14:28.490 "name": null, 00:14:28.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.490 "is_configured": false, 00:14:28.490 "data_offset": 2048, 00:14:28.490 "data_size": 63488 00:14:28.490 } 00:14:28.490 ] 00:14:28.490 }' 00:14:28.490 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.490 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.748 [2024-11-04 14:46:58.573041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.748 [2024-11-04 14:46:58.573416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.748 [2024-11-04 14:46:58.573493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:28.748 [2024-11-04 14:46:58.573672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.748 [2024-11-04 14:46:58.574440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.748 [2024-11-04 14:46:58.574481] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.748 [2024-11-04 14:46:58.574599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.748 [2024-11-04 14:46:58.574640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.748 [2024-11-04 14:46:58.574810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:28.748 [2024-11-04 14:46:58.574832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:28.748 [2024-11-04 14:46:58.575161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:28.748 [2024-11-04 14:46:58.575399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:28.748 [2024-11-04 14:46:58.575416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:28.748 [2024-11-04 14:46:58.575598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.748 pt2 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.748 
14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.748 "name": "raid_bdev1", 00:14:28.748 "uuid": "7b82f7d4-9090-464b-ac18-51de5dfd31d8", 00:14:28.748 "strip_size_kb": 64, 00:14:28.748 "state": "online", 00:14:28.748 "raid_level": "concat", 00:14:28.748 "superblock": true, 00:14:28.748 "num_base_bdevs": 2, 00:14:28.748 "num_base_bdevs_discovered": 2, 00:14:28.748 "num_base_bdevs_operational": 2, 00:14:28.748 "base_bdevs_list": [ 00:14:28.748 { 00:14:28.748 "name": "pt1", 00:14:28.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.748 "is_configured": true, 00:14:28.748 "data_offset": 2048, 00:14:28.748 "data_size": 63488 00:14:28.748 }, 00:14:28.748 { 00:14:28.748 "name": "pt2", 00:14:28.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.748 "is_configured": true, 00:14:28.748 "data_offset": 2048, 00:14:28.748 "data_size": 63488 
00:14:28.748 } 00:14:28.748 ] 00:14:28.748 }' 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.748 14:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.314 [2024-11-04 14:46:59.061512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.314 "name": "raid_bdev1", 00:14:29.314 "aliases": [ 00:14:29.314 "7b82f7d4-9090-464b-ac18-51de5dfd31d8" 00:14:29.314 ], 00:14:29.314 "product_name": "Raid Volume", 00:14:29.314 "block_size": 512, 00:14:29.314 "num_blocks": 126976, 00:14:29.314 "uuid": "7b82f7d4-9090-464b-ac18-51de5dfd31d8", 00:14:29.314 "assigned_rate_limits": { 00:14:29.314 
"rw_ios_per_sec": 0, 00:14:29.314 "rw_mbytes_per_sec": 0, 00:14:29.314 "r_mbytes_per_sec": 0, 00:14:29.314 "w_mbytes_per_sec": 0 00:14:29.314 }, 00:14:29.314 "claimed": false, 00:14:29.314 "zoned": false, 00:14:29.314 "supported_io_types": { 00:14:29.314 "read": true, 00:14:29.314 "write": true, 00:14:29.314 "unmap": true, 00:14:29.314 "flush": true, 00:14:29.314 "reset": true, 00:14:29.314 "nvme_admin": false, 00:14:29.314 "nvme_io": false, 00:14:29.314 "nvme_io_md": false, 00:14:29.314 "write_zeroes": true, 00:14:29.314 "zcopy": false, 00:14:29.314 "get_zone_info": false, 00:14:29.314 "zone_management": false, 00:14:29.314 "zone_append": false, 00:14:29.314 "compare": false, 00:14:29.314 "compare_and_write": false, 00:14:29.314 "abort": false, 00:14:29.314 "seek_hole": false, 00:14:29.314 "seek_data": false, 00:14:29.314 "copy": false, 00:14:29.314 "nvme_iov_md": false 00:14:29.314 }, 00:14:29.314 "memory_domains": [ 00:14:29.314 { 00:14:29.314 "dma_device_id": "system", 00:14:29.314 "dma_device_type": 1 00:14:29.314 }, 00:14:29.314 { 00:14:29.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.314 "dma_device_type": 2 00:14:29.314 }, 00:14:29.314 { 00:14:29.314 "dma_device_id": "system", 00:14:29.314 "dma_device_type": 1 00:14:29.314 }, 00:14:29.314 { 00:14:29.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.314 "dma_device_type": 2 00:14:29.314 } 00:14:29.314 ], 00:14:29.314 "driver_specific": { 00:14:29.314 "raid": { 00:14:29.314 "uuid": "7b82f7d4-9090-464b-ac18-51de5dfd31d8", 00:14:29.314 "strip_size_kb": 64, 00:14:29.314 "state": "online", 00:14:29.314 "raid_level": "concat", 00:14:29.314 "superblock": true, 00:14:29.314 "num_base_bdevs": 2, 00:14:29.314 "num_base_bdevs_discovered": 2, 00:14:29.314 "num_base_bdevs_operational": 2, 00:14:29.314 "base_bdevs_list": [ 00:14:29.314 { 00:14:29.314 "name": "pt1", 00:14:29.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.314 "is_configured": true, 00:14:29.314 "data_offset": 2048, 00:14:29.314 
"data_size": 63488 00:14:29.314 }, 00:14:29.314 { 00:14:29.314 "name": "pt2", 00:14:29.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.314 "is_configured": true, 00:14:29.314 "data_offset": 2048, 00:14:29.314 "data_size": 63488 00:14:29.314 } 00:14:29.314 ] 00:14:29.314 } 00:14:29.314 } 00:14:29.314 }' 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.314 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:29.314 pt2' 00:14:29.315 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.573 [2024-11-04 14:46:59.317529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7b82f7d4-9090-464b-ac18-51de5dfd31d8 '!=' 7b82f7d4-9090-464b-ac18-51de5dfd31d8 ']' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62263 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62263 
']' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62263 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62263 00:14:29.573 killing process with pid 62263 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62263' 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62263 00:14:29.573 [2024-11-04 14:46:59.406526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.573 14:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62263 00:14:29.573 [2024-11-04 14:46:59.406688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.573 [2024-11-04 14:46:59.406762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.573 [2024-11-04 14:46:59.406782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:29.831 [2024-11-04 14:46:59.611021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.204 14:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:31.204 00:14:31.204 real 0m5.014s 00:14:31.204 user 0m7.266s 00:14:31.204 sys 0m0.772s 00:14:31.204 ************************************ 00:14:31.204 END TEST raid_superblock_test 00:14:31.204 
************************************ 00:14:31.204 14:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:31.204 14:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.204 14:47:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:14:31.204 14:47:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:31.204 14:47:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:31.204 14:47:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.204 ************************************ 00:14:31.204 START TEST raid_read_error_test 00:14:31.204 ************************************ 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5K6QkoIEys 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62475 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62475 00:14:31.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62475 ']' 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.204 14:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:31.204 [2024-11-04 14:47:00.960101] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:14:31.204 [2024-11-04 14:47:00.960750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62475 ] 00:14:31.463 [2024-11-04 14:47:01.152986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.463 [2024-11-04 14:47:01.314851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.722 [2024-11-04 14:47:01.572089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.722 [2024-11-04 14:47:01.572182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.288 14:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:32.288 14:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:32.288 14:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.288 14:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:32.288 14:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.288 14:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 BaseBdev1_malloc 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 true 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 [2024-11-04 14:47:02.042070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:32.288 [2024-11-04 14:47:02.042149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.288 [2024-11-04 14:47:02.042182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:32.288 [2024-11-04 14:47:02.042202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.288 [2024-11-04 14:47:02.045360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.288 [2024-11-04 14:47:02.045539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.288 BaseBdev1 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 BaseBdev2_malloc 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 true 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 [2024-11-04 14:47:02.112509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:32.288 [2024-11-04 14:47:02.112586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.288 [2024-11-04 14:47:02.112614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:32.288 [2024-11-04 14:47:02.112633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.288 [2024-11-04 14:47:02.115641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.288 [2024-11-04 14:47:02.115690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:32.288 BaseBdev2 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 [2024-11-04 14:47:02.120665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:32.288 [2024-11-04 14:47:02.123418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.288 [2024-11-04 14:47:02.123695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:32.288 [2024-11-04 14:47:02.123720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:32.288 [2024-11-04 14:47:02.124042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:32.288 [2024-11-04 14:47:02.124315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:32.288 [2024-11-04 14:47:02.124348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:32.288 [2024-11-04 14:47:02.124631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:32.288 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.289 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.547 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.547 "name": "raid_bdev1", 00:14:32.547 "uuid": "26807236-b7e7-4b02-a7e5-7e29f81ab52a", 00:14:32.547 "strip_size_kb": 64, 00:14:32.547 "state": "online", 00:14:32.547 "raid_level": "concat", 00:14:32.547 "superblock": true, 00:14:32.547 "num_base_bdevs": 2, 00:14:32.547 "num_base_bdevs_discovered": 2, 00:14:32.547 "num_base_bdevs_operational": 2, 00:14:32.547 "base_bdevs_list": [ 00:14:32.547 { 00:14:32.547 "name": "BaseBdev1", 00:14:32.547 "uuid": "3d9150b8-ddf1-5d9d-86cf-56c9a64d30e0", 00:14:32.547 "is_configured": true, 00:14:32.547 "data_offset": 2048, 00:14:32.547 "data_size": 63488 00:14:32.547 }, 00:14:32.547 { 00:14:32.547 "name": "BaseBdev2", 00:14:32.547 "uuid": "fa07a2e9-ef65-55a6-84eb-f7d854592875", 00:14:32.547 "is_configured": true, 00:14:32.547 "data_offset": 2048, 00:14:32.547 "data_size": 63488 00:14:32.547 } 00:14:32.547 ] 00:14:32.547 }' 00:14:32.547 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.547 14:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.806 14:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:32.806 14:47:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:33.065 [2024-11-04 14:47:02.794426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.999 "name": "raid_bdev1", 00:14:33.999 "uuid": "26807236-b7e7-4b02-a7e5-7e29f81ab52a", 00:14:33.999 "strip_size_kb": 64, 00:14:33.999 "state": "online", 00:14:33.999 "raid_level": "concat", 00:14:33.999 "superblock": true, 00:14:33.999 "num_base_bdevs": 2, 00:14:33.999 "num_base_bdevs_discovered": 2, 00:14:33.999 "num_base_bdevs_operational": 2, 00:14:33.999 "base_bdevs_list": [ 00:14:33.999 { 00:14:33.999 "name": "BaseBdev1", 00:14:33.999 "uuid": "3d9150b8-ddf1-5d9d-86cf-56c9a64d30e0", 00:14:33.999 "is_configured": true, 00:14:33.999 "data_offset": 2048, 00:14:33.999 "data_size": 63488 00:14:33.999 }, 00:14:33.999 { 00:14:33.999 "name": "BaseBdev2", 00:14:33.999 "uuid": "fa07a2e9-ef65-55a6-84eb-f7d854592875", 00:14:33.999 "is_configured": true, 00:14:33.999 "data_offset": 2048, 00:14:33.999 "data_size": 63488 00:14:33.999 } 00:14:33.999 ] 00:14:33.999 }' 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.999 14:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:34.566 14:47:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.566 [2024-11-04 14:47:04.204003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.566 [2024-11-04 14:47:04.204197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.566 [2024-11-04 14:47:04.207772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.566 [2024-11-04 14:47:04.207950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.566 [2024-11-04 14:47:04.208042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.566 [2024-11-04 14:47:04.208282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:34.566 { 00:14:34.566 "results": [ 00:14:34.566 { 00:14:34.566 "job": "raid_bdev1", 00:14:34.566 "core_mask": "0x1", 00:14:34.566 "workload": "randrw", 00:14:34.566 "percentage": 50, 00:14:34.566 "status": "finished", 00:14:34.566 "queue_depth": 1, 00:14:34.566 "io_size": 131072, 00:14:34.566 "runtime": 1.407315, 00:14:34.566 "iops": 9959.390754735081, 00:14:34.566 "mibps": 1244.9238443418851, 00:14:34.566 "io_failed": 1, 00:14:34.566 "io_timeout": 0, 00:14:34.566 "avg_latency_us": 141.62454059032214, 00:14:34.566 "min_latency_us": 43.985454545454544, 00:14:34.566 "max_latency_us": 1884.16 00:14:34.566 } 00:14:34.566 ], 00:14:34.566 "core_count": 1 00:14:34.566 } 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62475 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62475 ']' 00:14:34.566 14:47:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62475 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62475 00:14:34.566 killing process with pid 62475 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62475' 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62475 00:14:34.566 [2024-11-04 14:47:04.247577] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.566 14:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62475 00:14:34.566 [2024-11-04 14:47:04.381911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5K6QkoIEys 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:35.943 ************************************ 00:14:35.943 END TEST raid_read_error_test 00:14:35.943 ************************************ 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:35.943 00:14:35.943 real 0m4.753s 00:14:35.943 user 0m5.867s 00:14:35.943 sys 0m0.672s 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:35.943 14:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.943 14:47:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:14:35.943 14:47:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:35.943 14:47:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:35.943 14:47:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.943 ************************************ 00:14:35.943 START TEST raid_write_error_test 00:14:35.943 ************************************ 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:35.943 14:47:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dpokBS2hwU 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62620 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62620 00:14:35.943 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62620 ']' 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:35.943 14:47:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.944 [2024-11-04 14:47:05.781179] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:14:35.944 [2024-11-04 14:47:05.781483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62620 ] 00:14:36.202 [2024-11-04 14:47:05.979333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.460 [2024-11-04 14:47:06.138398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.717 [2024-11-04 14:47:06.355940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.717 [2024-11-04 14:47:06.356009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.976 BaseBdev1_malloc 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.976 true 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.976 [2024-11-04 14:47:06.779008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:36.976 [2024-11-04 14:47:06.779094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.976 [2024-11-04 14:47:06.779126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:36.976 [2024-11-04 14:47:06.779146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.976 [2024-11-04 14:47:06.782031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.976 [2024-11-04 14:47:06.782329] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.976 BaseBdev1 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.976 BaseBdev2_malloc 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.976 true 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.976 [2024-11-04 14:47:06.841963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:36.976 [2024-11-04 14:47:06.842269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.976 [2024-11-04 14:47:06.842307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:36.976 
[2024-11-04 14:47:06.842328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.976 [2024-11-04 14:47:06.845112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.976 [2024-11-04 14:47:06.845164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.976 BaseBdev2 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.976 [2024-11-04 14:47:06.850113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.976 [2024-11-04 14:47:06.852534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.976 [2024-11-04 14:47:06.852945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:36.976 [2024-11-04 14:47:06.852976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:36.976 [2024-11-04 14:47:06.853296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:36.976 [2024-11-04 14:47:06.853561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:36.976 [2024-11-04 14:47:06.853582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:36.976 [2024-11-04 14:47:06.853783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.976 
14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.976 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.977 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.235 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.235 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.235 "name": "raid_bdev1", 00:14:37.235 "uuid": "de82b45e-69f0-4523-a885-f87e308311bb", 00:14:37.235 "strip_size_kb": 64, 00:14:37.235 "state": "online", 00:14:37.235 "raid_level": "concat", 00:14:37.235 "superblock": true, 
00:14:37.235 "num_base_bdevs": 2, 00:14:37.235 "num_base_bdevs_discovered": 2, 00:14:37.235 "num_base_bdevs_operational": 2, 00:14:37.235 "base_bdevs_list": [ 00:14:37.235 { 00:14:37.235 "name": "BaseBdev1", 00:14:37.235 "uuid": "f9604f95-e7eb-529b-a589-c8b58859f56c", 00:14:37.235 "is_configured": true, 00:14:37.235 "data_offset": 2048, 00:14:37.235 "data_size": 63488 00:14:37.235 }, 00:14:37.235 { 00:14:37.235 "name": "BaseBdev2", 00:14:37.235 "uuid": "43ece71e-658d-560d-8018-6f5f45f44328", 00:14:37.235 "is_configured": true, 00:14:37.235 "data_offset": 2048, 00:14:37.235 "data_size": 63488 00:14:37.235 } 00:14:37.235 ] 00:14:37.235 }' 00:14:37.235 14:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.235 14:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.494 14:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:37.494 14:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:37.752 [2024-11-04 14:47:07.539859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:38.686 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:38.686 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.686 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.686 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.686 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:38.686 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:38.686 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:14:38.686 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.687 "name": "raid_bdev1", 00:14:38.687 "uuid": "de82b45e-69f0-4523-a885-f87e308311bb", 00:14:38.687 "strip_size_kb": 64, 00:14:38.687 "state": "online", 00:14:38.687 "raid_level": "concat", 
00:14:38.687 "superblock": true, 00:14:38.687 "num_base_bdevs": 2, 00:14:38.687 "num_base_bdevs_discovered": 2, 00:14:38.687 "num_base_bdevs_operational": 2, 00:14:38.687 "base_bdevs_list": [ 00:14:38.687 { 00:14:38.687 "name": "BaseBdev1", 00:14:38.687 "uuid": "f9604f95-e7eb-529b-a589-c8b58859f56c", 00:14:38.687 "is_configured": true, 00:14:38.687 "data_offset": 2048, 00:14:38.687 "data_size": 63488 00:14:38.687 }, 00:14:38.687 { 00:14:38.687 "name": "BaseBdev2", 00:14:38.687 "uuid": "43ece71e-658d-560d-8018-6f5f45f44328", 00:14:38.687 "is_configured": true, 00:14:38.687 "data_offset": 2048, 00:14:38.687 "data_size": 63488 00:14:38.687 } 00:14:38.687 ] 00:14:38.687 }' 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.687 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.289 [2024-11-04 14:47:08.934280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.289 [2024-11-04 14:47:08.934347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.289 [2024-11-04 14:47:08.937768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.289 { 00:14:39.289 "results": [ 00:14:39.289 { 00:14:39.289 "job": "raid_bdev1", 00:14:39.289 "core_mask": "0x1", 00:14:39.289 "workload": "randrw", 00:14:39.289 "percentage": 50, 00:14:39.289 "status": "finished", 00:14:39.289 "queue_depth": 1, 00:14:39.289 "io_size": 131072, 00:14:39.289 "runtime": 1.391575, 00:14:39.289 "iops": 9631.532615920809, 00:14:39.289 "mibps": 1203.941576990101, 00:14:39.289 "io_failed": 1, 
00:14:39.289 "io_timeout": 0, 00:14:39.289 "avg_latency_us": 146.76771655679445, 00:14:39.289 "min_latency_us": 44.916363636363634, 00:14:39.289 "max_latency_us": 1936.290909090909 00:14:39.289 } 00:14:39.289 ], 00:14:39.289 "core_count": 1 00:14:39.289 } 00:14:39.289 [2024-11-04 14:47:08.937997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.289 [2024-11-04 14:47:08.938066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.289 [2024-11-04 14:47:08.938093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62620 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62620 ']' 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62620 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62620 00:14:39.289 killing process with pid 62620 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62620' 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62620 00:14:39.289 14:47:08 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@976 -- # wait 62620 00:14:39.289 [2024-11-04 14:47:08.973077] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.289 [2024-11-04 14:47:09.109013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dpokBS2hwU 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:40.677 00:14:40.677 real 0m4.699s 00:14:40.677 user 0m5.888s 00:14:40.677 sys 0m0.588s 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.677 ************************************ 00:14:40.677 END TEST raid_write_error_test 00:14:40.677 ************************************ 00:14:40.677 14:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.677 14:47:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:40.677 14:47:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:40.677 14:47:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:40.677 14:47:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:40.677 14:47:10 bdev_raid -- common/autotest_common.sh@10 -- # set 
+x 00:14:40.677 ************************************ 00:14:40.677 START TEST raid_state_function_test 00:14:40.677 ************************************ 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:40.677 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local 
strip_size 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:40.678 Process raid pid: 62764 00:14:40.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62764 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62764' 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62764 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62764 ']' 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:40.678 14:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.678 [2024-11-04 14:47:10.497762] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:14:40.678 [2024-11-04 14:47:10.498247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.936 [2024-11-04 14:47:10.684325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.194 [2024-11-04 14:47:10.833172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.194 [2024-11-04 14:47:11.066409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.194 [2024-11-04 14:47:11.066728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.760 [2024-11-04 14:47:11.503266] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.760 [2024-11-04 14:47:11.503669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.760 [2024-11-04 14:47:11.503700] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:14:41.760 [2024-11-04 14:47:11.503720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.760 "name": "Existed_Raid", 00:14:41.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.760 "strip_size_kb": 0, 00:14:41.760 "state": "configuring", 00:14:41.760 "raid_level": "raid1", 00:14:41.760 "superblock": false, 00:14:41.760 "num_base_bdevs": 2, 00:14:41.760 "num_base_bdevs_discovered": 0, 00:14:41.760 "num_base_bdevs_operational": 2, 00:14:41.760 "base_bdevs_list": [ 00:14:41.760 { 00:14:41.760 "name": "BaseBdev1", 00:14:41.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.760 "is_configured": false, 00:14:41.760 "data_offset": 0, 00:14:41.760 "data_size": 0 00:14:41.760 }, 00:14:41.760 { 00:14:41.760 "name": "BaseBdev2", 00:14:41.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.760 "is_configured": false, 00:14:41.760 "data_offset": 0, 00:14:41.760 "data_size": 0 00:14:41.760 } 00:14:41.760 ] 00:14:41.760 }' 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.760 14:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.327 [2024-11-04 14:47:12.011332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.327 [2024-11-04 14:47:12.011413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.327 [2024-11-04 14:47:12.019280] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.327 [2024-11-04 14:47:12.019368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.327 [2024-11-04 14:47:12.019387] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.327 [2024-11-04 14:47:12.019407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.327 [2024-11-04 14:47:12.068643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.327 BaseBdev1 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:42.327 
14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.327 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.327 [ 00:14:42.327 { 00:14:42.327 "name": "BaseBdev1", 00:14:42.327 "aliases": [ 00:14:42.327 "08e5548f-9c67-496a-9c2a-bc58ac743ace" 00:14:42.327 ], 00:14:42.327 "product_name": "Malloc disk", 00:14:42.327 "block_size": 512, 00:14:42.327 "num_blocks": 65536, 00:14:42.327 "uuid": "08e5548f-9c67-496a-9c2a-bc58ac743ace", 00:14:42.327 "assigned_rate_limits": { 00:14:42.327 "rw_ios_per_sec": 0, 00:14:42.327 "rw_mbytes_per_sec": 0, 00:14:42.327 "r_mbytes_per_sec": 0, 00:14:42.327 "w_mbytes_per_sec": 0 00:14:42.327 }, 00:14:42.327 "claimed": true, 00:14:42.327 "claim_type": "exclusive_write", 00:14:42.327 "zoned": false, 00:14:42.327 "supported_io_types": { 00:14:42.327 "read": true, 00:14:42.327 "write": true, 00:14:42.327 "unmap": true, 00:14:42.327 "flush": true, 00:14:42.327 "reset": true, 00:14:42.327 "nvme_admin": false, 00:14:42.327 "nvme_io": false, 00:14:42.328 "nvme_io_md": false, 00:14:42.328 "write_zeroes": true, 00:14:42.328 "zcopy": true, 00:14:42.328 "get_zone_info": 
false, 00:14:42.328 "zone_management": false, 00:14:42.328 "zone_append": false, 00:14:42.328 "compare": false, 00:14:42.328 "compare_and_write": false, 00:14:42.328 "abort": true, 00:14:42.328 "seek_hole": false, 00:14:42.328 "seek_data": false, 00:14:42.328 "copy": true, 00:14:42.328 "nvme_iov_md": false 00:14:42.328 }, 00:14:42.328 "memory_domains": [ 00:14:42.328 { 00:14:42.328 "dma_device_id": "system", 00:14:42.328 "dma_device_type": 1 00:14:42.328 }, 00:14:42.328 { 00:14:42.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.328 "dma_device_type": 2 00:14:42.328 } 00:14:42.328 ], 00:14:42.328 "driver_specific": {} 00:14:42.328 } 00:14:42.328 ] 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.328 "name": "Existed_Raid", 00:14:42.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.328 "strip_size_kb": 0, 00:14:42.328 "state": "configuring", 00:14:42.328 "raid_level": "raid1", 00:14:42.328 "superblock": false, 00:14:42.328 "num_base_bdevs": 2, 00:14:42.328 "num_base_bdevs_discovered": 1, 00:14:42.328 "num_base_bdevs_operational": 2, 00:14:42.328 "base_bdevs_list": [ 00:14:42.328 { 00:14:42.328 "name": "BaseBdev1", 00:14:42.328 "uuid": "08e5548f-9c67-496a-9c2a-bc58ac743ace", 00:14:42.328 "is_configured": true, 00:14:42.328 "data_offset": 0, 00:14:42.328 "data_size": 65536 00:14:42.328 }, 00:14:42.328 { 00:14:42.328 "name": "BaseBdev2", 00:14:42.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.328 "is_configured": false, 00:14:42.328 "data_offset": 0, 00:14:42.328 "data_size": 0 00:14:42.328 } 00:14:42.328 ] 00:14:42.328 }' 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.328 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.894 [2024-11-04 14:47:12.572872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.894 [2024-11-04 14:47:12.572953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.894 [2024-11-04 14:47:12.580938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.894 [2024-11-04 14:47:12.583863] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.894 [2024-11-04 14:47:12.584068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:42.894 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.895 "name": "Existed_Raid", 00:14:42.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.895 "strip_size_kb": 0, 00:14:42.895 "state": "configuring", 00:14:42.895 "raid_level": "raid1", 00:14:42.895 "superblock": false, 00:14:42.895 "num_base_bdevs": 2, 00:14:42.895 "num_base_bdevs_discovered": 1, 00:14:42.895 "num_base_bdevs_operational": 2, 00:14:42.895 "base_bdevs_list": [ 00:14:42.895 { 00:14:42.895 "name": "BaseBdev1", 00:14:42.895 "uuid": "08e5548f-9c67-496a-9c2a-bc58ac743ace", 00:14:42.895 
"is_configured": true, 00:14:42.895 "data_offset": 0, 00:14:42.895 "data_size": 65536 00:14:42.895 }, 00:14:42.895 { 00:14:42.895 "name": "BaseBdev2", 00:14:42.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.895 "is_configured": false, 00:14:42.895 "data_offset": 0, 00:14:42.895 "data_size": 0 00:14:42.895 } 00:14:42.895 ] 00:14:42.895 }' 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.895 14:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.463 [2024-11-04 14:47:13.092351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.463 [2024-11-04 14:47:13.092448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:43.463 [2024-11-04 14:47:13.092462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:43.463 [2024-11-04 14:47:13.092824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:43.463 [2024-11-04 14:47:13.093057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:43.463 [2024-11-04 14:47:13.093083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:43.463 [2024-11-04 14:47:13.093488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.463 BaseBdev2 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.463 [ 00:14:43.463 { 00:14:43.463 "name": "BaseBdev2", 00:14:43.463 "aliases": [ 00:14:43.463 "11bb69a2-137c-409f-9360-47ad68bf8964" 00:14:43.463 ], 00:14:43.463 "product_name": "Malloc disk", 00:14:43.463 "block_size": 512, 00:14:43.463 "num_blocks": 65536, 00:14:43.463 "uuid": "11bb69a2-137c-409f-9360-47ad68bf8964", 00:14:43.463 "assigned_rate_limits": { 00:14:43.463 "rw_ios_per_sec": 0, 00:14:43.463 "rw_mbytes_per_sec": 0, 00:14:43.463 "r_mbytes_per_sec": 0, 00:14:43.463 "w_mbytes_per_sec": 0 00:14:43.463 }, 00:14:43.463 "claimed": true, 00:14:43.463 "claim_type": 
"exclusive_write", 00:14:43.463 "zoned": false, 00:14:43.463 "supported_io_types": { 00:14:43.463 "read": true, 00:14:43.463 "write": true, 00:14:43.463 "unmap": true, 00:14:43.463 "flush": true, 00:14:43.463 "reset": true, 00:14:43.463 "nvme_admin": false, 00:14:43.463 "nvme_io": false, 00:14:43.463 "nvme_io_md": false, 00:14:43.463 "write_zeroes": true, 00:14:43.463 "zcopy": true, 00:14:43.463 "get_zone_info": false, 00:14:43.463 "zone_management": false, 00:14:43.463 "zone_append": false, 00:14:43.463 "compare": false, 00:14:43.463 "compare_and_write": false, 00:14:43.463 "abort": true, 00:14:43.463 "seek_hole": false, 00:14:43.463 "seek_data": false, 00:14:43.463 "copy": true, 00:14:43.463 "nvme_iov_md": false 00:14:43.463 }, 00:14:43.463 "memory_domains": [ 00:14:43.463 { 00:14:43.463 "dma_device_id": "system", 00:14:43.463 "dma_device_type": 1 00:14:43.463 }, 00:14:43.463 { 00:14:43.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.463 "dma_device_type": 2 00:14:43.463 } 00:14:43.463 ], 00:14:43.463 "driver_specific": {} 00:14:43.463 } 00:14:43.463 ] 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.463 
14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.463 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.464 "name": "Existed_Raid", 00:14:43.464 "uuid": "e8ab7b16-a8c6-472d-b194-4910f7a31f10", 00:14:43.464 "strip_size_kb": 0, 00:14:43.464 "state": "online", 00:14:43.464 "raid_level": "raid1", 00:14:43.464 "superblock": false, 00:14:43.464 "num_base_bdevs": 2, 00:14:43.464 "num_base_bdevs_discovered": 2, 00:14:43.464 "num_base_bdevs_operational": 2, 00:14:43.464 "base_bdevs_list": [ 00:14:43.464 { 00:14:43.464 "name": "BaseBdev1", 00:14:43.464 "uuid": "08e5548f-9c67-496a-9c2a-bc58ac743ace", 00:14:43.464 "is_configured": true, 00:14:43.464 "data_offset": 0, 00:14:43.464 "data_size": 65536 00:14:43.464 }, 00:14:43.464 { 00:14:43.464 "name": "BaseBdev2", 
00:14:43.464 "uuid": "11bb69a2-137c-409f-9360-47ad68bf8964", 00:14:43.464 "is_configured": true, 00:14:43.464 "data_offset": 0, 00:14:43.464 "data_size": 65536 00:14:43.464 } 00:14:43.464 ] 00:14:43.464 }' 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.464 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.031 [2024-11-04 14:47:13.636929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.031 "name": "Existed_Raid", 00:14:44.031 "aliases": [ 00:14:44.031 "e8ab7b16-a8c6-472d-b194-4910f7a31f10" 00:14:44.031 ], 
00:14:44.031 "product_name": "Raid Volume", 00:14:44.031 "block_size": 512, 00:14:44.031 "num_blocks": 65536, 00:14:44.031 "uuid": "e8ab7b16-a8c6-472d-b194-4910f7a31f10", 00:14:44.031 "assigned_rate_limits": { 00:14:44.031 "rw_ios_per_sec": 0, 00:14:44.031 "rw_mbytes_per_sec": 0, 00:14:44.031 "r_mbytes_per_sec": 0, 00:14:44.031 "w_mbytes_per_sec": 0 00:14:44.031 }, 00:14:44.031 "claimed": false, 00:14:44.031 "zoned": false, 00:14:44.031 "supported_io_types": { 00:14:44.031 "read": true, 00:14:44.031 "write": true, 00:14:44.031 "unmap": false, 00:14:44.031 "flush": false, 00:14:44.031 "reset": true, 00:14:44.031 "nvme_admin": false, 00:14:44.031 "nvme_io": false, 00:14:44.031 "nvme_io_md": false, 00:14:44.031 "write_zeroes": true, 00:14:44.031 "zcopy": false, 00:14:44.031 "get_zone_info": false, 00:14:44.031 "zone_management": false, 00:14:44.031 "zone_append": false, 00:14:44.031 "compare": false, 00:14:44.031 "compare_and_write": false, 00:14:44.031 "abort": false, 00:14:44.031 "seek_hole": false, 00:14:44.031 "seek_data": false, 00:14:44.031 "copy": false, 00:14:44.031 "nvme_iov_md": false 00:14:44.031 }, 00:14:44.031 "memory_domains": [ 00:14:44.031 { 00:14:44.031 "dma_device_id": "system", 00:14:44.031 "dma_device_type": 1 00:14:44.031 }, 00:14:44.031 { 00:14:44.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.031 "dma_device_type": 2 00:14:44.031 }, 00:14:44.031 { 00:14:44.031 "dma_device_id": "system", 00:14:44.031 "dma_device_type": 1 00:14:44.031 }, 00:14:44.031 { 00:14:44.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.031 "dma_device_type": 2 00:14:44.031 } 00:14:44.031 ], 00:14:44.031 "driver_specific": { 00:14:44.031 "raid": { 00:14:44.031 "uuid": "e8ab7b16-a8c6-472d-b194-4910f7a31f10", 00:14:44.031 "strip_size_kb": 0, 00:14:44.031 "state": "online", 00:14:44.031 "raid_level": "raid1", 00:14:44.031 "superblock": false, 00:14:44.031 "num_base_bdevs": 2, 00:14:44.031 "num_base_bdevs_discovered": 2, 00:14:44.031 "num_base_bdevs_operational": 
2, 00:14:44.031 "base_bdevs_list": [ 00:14:44.031 { 00:14:44.031 "name": "BaseBdev1", 00:14:44.031 "uuid": "08e5548f-9c67-496a-9c2a-bc58ac743ace", 00:14:44.031 "is_configured": true, 00:14:44.031 "data_offset": 0, 00:14:44.031 "data_size": 65536 00:14:44.031 }, 00:14:44.031 { 00:14:44.031 "name": "BaseBdev2", 00:14:44.031 "uuid": "11bb69a2-137c-409f-9360-47ad68bf8964", 00:14:44.031 "is_configured": true, 00:14:44.031 "data_offset": 0, 00:14:44.031 "data_size": 65536 00:14:44.031 } 00:14:44.031 ] 00:14:44.031 } 00:14:44.031 } 00:14:44.031 }' 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:44.031 BaseBdev2' 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.031 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.032 14:47:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.032 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.032 [2024-11-04 14:47:13.868728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.290 14:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.290 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.290 "name": "Existed_Raid", 00:14:44.290 "uuid": 
"e8ab7b16-a8c6-472d-b194-4910f7a31f10", 00:14:44.290 "strip_size_kb": 0, 00:14:44.290 "state": "online", 00:14:44.290 "raid_level": "raid1", 00:14:44.290 "superblock": false, 00:14:44.290 "num_base_bdevs": 2, 00:14:44.290 "num_base_bdevs_discovered": 1, 00:14:44.290 "num_base_bdevs_operational": 1, 00:14:44.290 "base_bdevs_list": [ 00:14:44.290 { 00:14:44.290 "name": null, 00:14:44.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.290 "is_configured": false, 00:14:44.290 "data_offset": 0, 00:14:44.290 "data_size": 65536 00:14:44.290 }, 00:14:44.290 { 00:14:44.290 "name": "BaseBdev2", 00:14:44.290 "uuid": "11bb69a2-137c-409f-9360-47ad68bf8964", 00:14:44.290 "is_configured": true, 00:14:44.290 "data_offset": 0, 00:14:44.290 "data_size": 65536 00:14:44.290 } 00:14:44.290 ] 00:14:44.290 }' 00:14:44.290 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.290 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.857 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:44.857 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.857 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.857 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.857 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.857 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.858 [2024-11-04 14:47:14.556703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.858 [2024-11-04 14:47:14.556853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.858 [2024-11-04 14:47:14.653278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.858 [2024-11-04 14:47:14.653377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.858 [2024-11-04 14:47:14.653412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:44.858 
14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62764 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62764 ']' 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62764 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62764 00:14:44.858 killing process with pid 62764 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62764' 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62764 00:14:44.858 14:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62764 00:14:44.858 [2024-11-04 14:47:14.731631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.858 [2024-11-04 14:47:14.747379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.233 14:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:46.233 00:14:46.234 real 0m5.504s 00:14:46.234 user 0m8.128s 00:14:46.234 sys 0m0.842s 00:14:46.234 ************************************ 00:14:46.234 END TEST raid_state_function_test 00:14:46.234 
************************************ 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.234 14:47:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:46.234 14:47:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:46.234 14:47:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:46.234 14:47:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.234 ************************************ 00:14:46.234 START TEST raid_state_function_test_sb 00:14:46.234 ************************************ 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63021 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63021' 00:14:46.234 Process raid pid: 63021 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63021 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # '[' -z 63021 ']' 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.234 14:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.234 [2024-11-04 14:47:16.056646] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:14:46.234 [2024-11-04 14:47:16.056819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.493 [2024-11-04 14:47:16.247424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.751 [2024-11-04 14:47:16.421549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.009 [2024-11-04 14:47:16.653152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.009 [2024-11-04 14:47:16.653240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.267 [2024-11-04 14:47:17.056058] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.267 [2024-11-04 14:47:17.056129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.267 [2024-11-04 14:47:17.056148] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.267 [2024-11-04 14:47:17.056165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.267 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.267 "name": "Existed_Raid", 00:14:47.267 "uuid": "5186a11c-b51d-4717-b113-c5b008a890c2", 00:14:47.267 "strip_size_kb": 0, 00:14:47.267 "state": "configuring", 00:14:47.267 "raid_level": "raid1", 00:14:47.267 "superblock": true, 00:14:47.267 "num_base_bdevs": 2, 00:14:47.267 "num_base_bdevs_discovered": 0, 00:14:47.267 "num_base_bdevs_operational": 2, 00:14:47.267 "base_bdevs_list": [ 00:14:47.267 { 00:14:47.267 "name": "BaseBdev1", 00:14:47.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.267 "is_configured": false, 00:14:47.268 "data_offset": 0, 00:14:47.268 "data_size": 0 00:14:47.268 }, 00:14:47.268 { 00:14:47.268 "name": "BaseBdev2", 00:14:47.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.268 "is_configured": false, 00:14:47.268 "data_offset": 0, 00:14:47.268 "data_size": 0 00:14:47.268 } 00:14:47.268 ] 00:14:47.268 }' 00:14:47.268 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.268 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.835 [2024-11-04 14:47:17.572108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.835 [2024-11-04 14:47:17.572163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.835 [2024-11-04 14:47:17.580092] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.835 [2024-11-04 14:47:17.580148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.835 [2024-11-04 14:47:17.580164] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.835 [2024-11-04 14:47:17.580184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:47.835 [2024-11-04 14:47:17.629145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.835 BaseBdev1 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.835 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.835 [ 00:14:47.835 { 00:14:47.835 "name": "BaseBdev1", 00:14:47.835 "aliases": [ 00:14:47.835 "669fc4a0-c6bb-4764-b875-d0b63206639c" 00:14:47.835 ], 00:14:47.835 "product_name": "Malloc disk", 00:14:47.835 "block_size": 512, 
00:14:47.835 "num_blocks": 65536, 00:14:47.835 "uuid": "669fc4a0-c6bb-4764-b875-d0b63206639c", 00:14:47.836 "assigned_rate_limits": { 00:14:47.836 "rw_ios_per_sec": 0, 00:14:47.836 "rw_mbytes_per_sec": 0, 00:14:47.836 "r_mbytes_per_sec": 0, 00:14:47.836 "w_mbytes_per_sec": 0 00:14:47.836 }, 00:14:47.836 "claimed": true, 00:14:47.836 "claim_type": "exclusive_write", 00:14:47.836 "zoned": false, 00:14:47.836 "supported_io_types": { 00:14:47.836 "read": true, 00:14:47.836 "write": true, 00:14:47.836 "unmap": true, 00:14:47.836 "flush": true, 00:14:47.836 "reset": true, 00:14:47.836 "nvme_admin": false, 00:14:47.836 "nvme_io": false, 00:14:47.836 "nvme_io_md": false, 00:14:47.836 "write_zeroes": true, 00:14:47.836 "zcopy": true, 00:14:47.836 "get_zone_info": false, 00:14:47.836 "zone_management": false, 00:14:47.836 "zone_append": false, 00:14:47.836 "compare": false, 00:14:47.836 "compare_and_write": false, 00:14:47.836 "abort": true, 00:14:47.836 "seek_hole": false, 00:14:47.836 "seek_data": false, 00:14:47.836 "copy": true, 00:14:47.836 "nvme_iov_md": false 00:14:47.836 }, 00:14:47.836 "memory_domains": [ 00:14:47.836 { 00:14:47.836 "dma_device_id": "system", 00:14:47.836 "dma_device_type": 1 00:14:47.836 }, 00:14:47.836 { 00:14:47.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.836 "dma_device_type": 2 00:14:47.836 } 00:14:47.836 ], 00:14:47.836 "driver_specific": {} 00:14:47.836 } 00:14:47.836 ] 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.836 "name": "Existed_Raid", 00:14:47.836 "uuid": "b202bc20-9cac-4201-9648-b2ccdbcda516", 00:14:47.836 "strip_size_kb": 0, 00:14:47.836 "state": "configuring", 00:14:47.836 "raid_level": "raid1", 00:14:47.836 "superblock": true, 00:14:47.836 "num_base_bdevs": 2, 00:14:47.836 "num_base_bdevs_discovered": 1, 00:14:47.836 "num_base_bdevs_operational": 2, 00:14:47.836 "base_bdevs_list": [ 00:14:47.836 { 00:14:47.836 "name": "BaseBdev1", 
00:14:47.836 "uuid": "669fc4a0-c6bb-4764-b875-d0b63206639c", 00:14:47.836 "is_configured": true, 00:14:47.836 "data_offset": 2048, 00:14:47.836 "data_size": 63488 00:14:47.836 }, 00:14:47.836 { 00:14:47.836 "name": "BaseBdev2", 00:14:47.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.836 "is_configured": false, 00:14:47.836 "data_offset": 0, 00:14:47.836 "data_size": 0 00:14:47.836 } 00:14:47.836 ] 00:14:47.836 }' 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.836 14:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.402 [2024-11-04 14:47:18.185369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.402 [2024-11-04 14:47:18.185460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.402 [2024-11-04 14:47:18.193448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.402 [2024-11-04 14:47:18.196168] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:14:48.402 [2024-11-04 14:47:18.196236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.402 "name": "Existed_Raid", 00:14:48.402 "uuid": "669bc924-7995-4b21-9013-adc52f43564e", 00:14:48.402 "strip_size_kb": 0, 00:14:48.402 "state": "configuring", 00:14:48.402 "raid_level": "raid1", 00:14:48.402 "superblock": true, 00:14:48.402 "num_base_bdevs": 2, 00:14:48.402 "num_base_bdevs_discovered": 1, 00:14:48.402 "num_base_bdevs_operational": 2, 00:14:48.402 "base_bdevs_list": [ 00:14:48.402 { 00:14:48.402 "name": "BaseBdev1", 00:14:48.402 "uuid": "669fc4a0-c6bb-4764-b875-d0b63206639c", 00:14:48.402 "is_configured": true, 00:14:48.402 "data_offset": 2048, 00:14:48.402 "data_size": 63488 00:14:48.402 }, 00:14:48.402 { 00:14:48.402 "name": "BaseBdev2", 00:14:48.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.402 "is_configured": false, 00:14:48.402 "data_offset": 0, 00:14:48.402 "data_size": 0 00:14:48.402 } 00:14:48.402 ] 00:14:48.402 }' 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.402 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 [2024-11-04 14:47:18.724326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.969 [2024-11-04 14:47:18.724703] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:48.969 [2024-11-04 14:47:18.724724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:48.969 [2024-11-04 14:47:18.725088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:48.969 BaseBdev2 00:14:48.969 [2024-11-04 14:47:18.725319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:48.969 [2024-11-04 14:47:18.725342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:48.969 [2024-11-04 14:47:18.725546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 [ 00:14:48.969 { 00:14:48.969 "name": "BaseBdev2", 00:14:48.969 "aliases": [ 00:14:48.969 "d65c1466-94c1-4427-9fa2-ff994d1e3d78" 00:14:48.969 ], 00:14:48.969 "product_name": "Malloc disk", 00:14:48.969 "block_size": 512, 00:14:48.969 "num_blocks": 65536, 00:14:48.969 "uuid": "d65c1466-94c1-4427-9fa2-ff994d1e3d78", 00:14:48.969 "assigned_rate_limits": { 00:14:48.969 "rw_ios_per_sec": 0, 00:14:48.969 "rw_mbytes_per_sec": 0, 00:14:48.969 "r_mbytes_per_sec": 0, 00:14:48.969 "w_mbytes_per_sec": 0 00:14:48.969 }, 00:14:48.969 "claimed": true, 00:14:48.969 "claim_type": "exclusive_write", 00:14:48.969 "zoned": false, 00:14:48.969 "supported_io_types": { 00:14:48.969 "read": true, 00:14:48.969 "write": true, 00:14:48.969 "unmap": true, 00:14:48.969 "flush": true, 00:14:48.969 "reset": true, 00:14:48.969 "nvme_admin": false, 00:14:48.969 "nvme_io": false, 00:14:48.969 "nvme_io_md": false, 00:14:48.969 "write_zeroes": true, 00:14:48.969 "zcopy": true, 00:14:48.969 "get_zone_info": false, 00:14:48.969 "zone_management": false, 00:14:48.969 "zone_append": false, 00:14:48.969 "compare": false, 00:14:48.969 "compare_and_write": false, 00:14:48.969 "abort": true, 00:14:48.969 "seek_hole": false, 00:14:48.969 "seek_data": false, 00:14:48.969 "copy": true, 00:14:48.969 "nvme_iov_md": false 00:14:48.969 }, 00:14:48.969 "memory_domains": [ 00:14:48.969 { 00:14:48.969 "dma_device_id": "system", 00:14:48.969 "dma_device_type": 1 00:14:48.969 }, 00:14:48.969 { 00:14:48.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.969 "dma_device_type": 2 00:14:48.969 } 00:14:48.969 ], 00:14:48.969 "driver_specific": 
{} 00:14:48.969 } 00:14:48.969 ] 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.969 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.970 "name": "Existed_Raid", 00:14:48.970 "uuid": "669bc924-7995-4b21-9013-adc52f43564e", 00:14:48.970 "strip_size_kb": 0, 00:14:48.970 "state": "online", 00:14:48.970 "raid_level": "raid1", 00:14:48.970 "superblock": true, 00:14:48.970 "num_base_bdevs": 2, 00:14:48.970 "num_base_bdevs_discovered": 2, 00:14:48.970 "num_base_bdevs_operational": 2, 00:14:48.970 "base_bdevs_list": [ 00:14:48.970 { 00:14:48.970 "name": "BaseBdev1", 00:14:48.970 "uuid": "669fc4a0-c6bb-4764-b875-d0b63206639c", 00:14:48.970 "is_configured": true, 00:14:48.970 "data_offset": 2048, 00:14:48.970 "data_size": 63488 00:14:48.970 }, 00:14:48.970 { 00:14:48.970 "name": "BaseBdev2", 00:14:48.970 "uuid": "d65c1466-94c1-4427-9fa2-ff994d1e3d78", 00:14:48.970 "is_configured": true, 00:14:48.970 "data_offset": 2048, 00:14:48.970 "data_size": 63488 00:14:48.970 } 00:14:48.970 ] 00:14:48.970 }' 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.970 14:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.536 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.536 [2024-11-04 14:47:19.236900] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.537 "name": "Existed_Raid", 00:14:49.537 "aliases": [ 00:14:49.537 "669bc924-7995-4b21-9013-adc52f43564e" 00:14:49.537 ], 00:14:49.537 "product_name": "Raid Volume", 00:14:49.537 "block_size": 512, 00:14:49.537 "num_blocks": 63488, 00:14:49.537 "uuid": "669bc924-7995-4b21-9013-adc52f43564e", 00:14:49.537 "assigned_rate_limits": { 00:14:49.537 "rw_ios_per_sec": 0, 00:14:49.537 "rw_mbytes_per_sec": 0, 00:14:49.537 "r_mbytes_per_sec": 0, 00:14:49.537 "w_mbytes_per_sec": 0 00:14:49.537 }, 00:14:49.537 "claimed": false, 00:14:49.537 "zoned": false, 00:14:49.537 "supported_io_types": { 00:14:49.537 "read": true, 00:14:49.537 "write": true, 00:14:49.537 "unmap": false, 00:14:49.537 "flush": false, 00:14:49.537 "reset": true, 00:14:49.537 "nvme_admin": false, 00:14:49.537 "nvme_io": false, 00:14:49.537 "nvme_io_md": false, 00:14:49.537 "write_zeroes": true, 00:14:49.537 "zcopy": false, 00:14:49.537 "get_zone_info": false, 00:14:49.537 "zone_management": false, 00:14:49.537 "zone_append": false, 00:14:49.537 "compare": false, 00:14:49.537 "compare_and_write": false, 
00:14:49.537 "abort": false, 00:14:49.537 "seek_hole": false, 00:14:49.537 "seek_data": false, 00:14:49.537 "copy": false, 00:14:49.537 "nvme_iov_md": false 00:14:49.537 }, 00:14:49.537 "memory_domains": [ 00:14:49.537 { 00:14:49.537 "dma_device_id": "system", 00:14:49.537 "dma_device_type": 1 00:14:49.537 }, 00:14:49.537 { 00:14:49.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.537 "dma_device_type": 2 00:14:49.537 }, 00:14:49.537 { 00:14:49.537 "dma_device_id": "system", 00:14:49.537 "dma_device_type": 1 00:14:49.537 }, 00:14:49.537 { 00:14:49.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.537 "dma_device_type": 2 00:14:49.537 } 00:14:49.537 ], 00:14:49.537 "driver_specific": { 00:14:49.537 "raid": { 00:14:49.537 "uuid": "669bc924-7995-4b21-9013-adc52f43564e", 00:14:49.537 "strip_size_kb": 0, 00:14:49.537 "state": "online", 00:14:49.537 "raid_level": "raid1", 00:14:49.537 "superblock": true, 00:14:49.537 "num_base_bdevs": 2, 00:14:49.537 "num_base_bdevs_discovered": 2, 00:14:49.537 "num_base_bdevs_operational": 2, 00:14:49.537 "base_bdevs_list": [ 00:14:49.537 { 00:14:49.537 "name": "BaseBdev1", 00:14:49.537 "uuid": "669fc4a0-c6bb-4764-b875-d0b63206639c", 00:14:49.537 "is_configured": true, 00:14:49.537 "data_offset": 2048, 00:14:49.537 "data_size": 63488 00:14:49.537 }, 00:14:49.537 { 00:14:49.537 "name": "BaseBdev2", 00:14:49.537 "uuid": "d65c1466-94c1-4427-9fa2-ff994d1e3d78", 00:14:49.537 "is_configured": true, 00:14:49.537 "data_offset": 2048, 00:14:49.537 "data_size": 63488 00:14:49.537 } 00:14:49.537 ] 00:14:49.537 } 00:14:49.537 } 00:14:49.537 }' 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:49.537 BaseBdev2' 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.537 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.795 [2024-11-04 14:47:19.528707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:49.795 14:47:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.795 14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.795 "name": "Existed_Raid", 00:14:49.795 "uuid": "669bc924-7995-4b21-9013-adc52f43564e", 00:14:49.795 "strip_size_kb": 0, 00:14:49.795 "state": "online", 00:14:49.796 "raid_level": "raid1", 00:14:49.796 "superblock": true, 00:14:49.796 "num_base_bdevs": 2, 00:14:49.796 "num_base_bdevs_discovered": 1, 00:14:49.796 "num_base_bdevs_operational": 1, 00:14:49.796 "base_bdevs_list": [ 00:14:49.796 { 00:14:49.796 "name": null, 00:14:49.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.796 "is_configured": false, 00:14:49.796 "data_offset": 0, 00:14:49.796 "data_size": 63488 00:14:49.796 }, 00:14:49.796 { 00:14:49.796 "name": "BaseBdev2", 00:14:49.796 "uuid": "d65c1466-94c1-4427-9fa2-ff994d1e3d78", 00:14:49.796 "is_configured": true, 00:14:49.796 "data_offset": 2048, 00:14:49.796 "data_size": 63488 00:14:49.796 } 00:14:49.796 ] 00:14:49.796 }' 00:14:49.796 
14:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.796 14:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.359 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:50.359 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.359 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:50.359 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.359 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.359 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.359 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.359 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:50.360 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.360 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:50.360 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.360 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.360 [2024-11-04 14:47:20.196438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.360 [2024-11-04 14:47:20.196598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.618 [2024-11-04 14:47:20.292605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.618 [2024-11-04 14:47:20.292698] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.618 [2024-11-04 14:47:20.292720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63021 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63021 ']' 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63021 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63021 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:50.618 killing process with pid 63021 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63021' 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63021 00:14:50.618 [2024-11-04 14:47:20.383029] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.618 14:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63021 00:14:50.618 [2024-11-04 14:47:20.398709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.992 14:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:51.992 00:14:51.992 real 0m5.602s 00:14:51.992 user 0m8.323s 00:14:51.992 sys 0m0.860s 00:14:51.992 14:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.992 ************************************ 00:14:51.992 END TEST raid_state_function_test_sb 00:14:51.992 ************************************ 00:14:51.992 14:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.992 14:47:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:14:51.992 14:47:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:51.992 14:47:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.992 14:47:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.992 
************************************ 00:14:51.992 START TEST raid_superblock_test 00:14:51.992 ************************************ 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63280 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 63280 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63280 ']' 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:51.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:51.992 14:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.992 [2024-11-04 14:47:21.708898] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:14:51.992 [2024-11-04 14:47:21.709076] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:14:52.250 [2024-11-04 14:47:21.888551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.250 [2024-11-04 14:47:22.033533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.508 [2024-11-04 14:47:22.261864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.508 [2024-11-04 14:47:22.261961] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:53.072 
14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.072 malloc1 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.072 [2024-11-04 14:47:22.818847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.072 [2024-11-04 14:47:22.818954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.072 [2024-11-04 14:47:22.819014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:53.072 [2024-11-04 14:47:22.819048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.072 [2024-11-04 14:47:22.822616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.072 [2024-11-04 14:47:22.822669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.072 pt1 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.072 malloc2 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.072 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.072 [2024-11-04 14:47:22.875256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:53.073 [2024-11-04 14:47:22.875342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.073 [2024-11-04 14:47:22.875393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:53.073 [2024-11-04 14:47:22.875416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.073 [2024-11-04 14:47:22.878679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.073 [2024-11-04 14:47:22.878850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:53.073 
pt2 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.073 [2024-11-04 14:47:22.883332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:53.073 [2024-11-04 14:47:22.886023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:53.073 [2024-11-04 14:47:22.886285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:53.073 [2024-11-04 14:47:22.886310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:53.073 [2024-11-04 14:47:22.886650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:53.073 [2024-11-04 14:47:22.886867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:53.073 [2024-11-04 14:47:22.886894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:53.073 [2024-11-04 14:47:22.887098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.073 "name": "raid_bdev1", 00:14:53.073 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:53.073 "strip_size_kb": 0, 00:14:53.073 "state": "online", 00:14:53.073 "raid_level": "raid1", 00:14:53.073 "superblock": true, 00:14:53.073 "num_base_bdevs": 2, 00:14:53.073 "num_base_bdevs_discovered": 2, 00:14:53.073 "num_base_bdevs_operational": 2, 00:14:53.073 "base_bdevs_list": [ 00:14:53.073 { 00:14:53.073 "name": "pt1", 00:14:53.073 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:53.073 "is_configured": true, 00:14:53.073 "data_offset": 2048, 00:14:53.073 "data_size": 63488 00:14:53.073 }, 00:14:53.073 { 00:14:53.073 "name": "pt2", 00:14:53.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.073 "is_configured": true, 00:14:53.073 "data_offset": 2048, 00:14:53.073 "data_size": 63488 00:14:53.073 } 00:14:53.073 ] 00:14:53.073 }' 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.073 14:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.639 [2024-11-04 14:47:23.411835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:14:53.639 "name": "raid_bdev1", 00:14:53.639 "aliases": [ 00:14:53.639 "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1" 00:14:53.639 ], 00:14:53.639 "product_name": "Raid Volume", 00:14:53.639 "block_size": 512, 00:14:53.639 "num_blocks": 63488, 00:14:53.639 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:53.639 "assigned_rate_limits": { 00:14:53.639 "rw_ios_per_sec": 0, 00:14:53.639 "rw_mbytes_per_sec": 0, 00:14:53.639 "r_mbytes_per_sec": 0, 00:14:53.639 "w_mbytes_per_sec": 0 00:14:53.639 }, 00:14:53.639 "claimed": false, 00:14:53.639 "zoned": false, 00:14:53.639 "supported_io_types": { 00:14:53.639 "read": true, 00:14:53.639 "write": true, 00:14:53.639 "unmap": false, 00:14:53.639 "flush": false, 00:14:53.639 "reset": true, 00:14:53.639 "nvme_admin": false, 00:14:53.639 "nvme_io": false, 00:14:53.639 "nvme_io_md": false, 00:14:53.639 "write_zeroes": true, 00:14:53.639 "zcopy": false, 00:14:53.639 "get_zone_info": false, 00:14:53.639 "zone_management": false, 00:14:53.639 "zone_append": false, 00:14:53.639 "compare": false, 00:14:53.639 "compare_and_write": false, 00:14:53.639 "abort": false, 00:14:53.639 "seek_hole": false, 00:14:53.639 "seek_data": false, 00:14:53.639 "copy": false, 00:14:53.639 "nvme_iov_md": false 00:14:53.639 }, 00:14:53.639 "memory_domains": [ 00:14:53.639 { 00:14:53.639 "dma_device_id": "system", 00:14:53.639 "dma_device_type": 1 00:14:53.639 }, 00:14:53.639 { 00:14:53.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.639 "dma_device_type": 2 00:14:53.639 }, 00:14:53.639 { 00:14:53.639 "dma_device_id": "system", 00:14:53.639 "dma_device_type": 1 00:14:53.639 }, 00:14:53.639 { 00:14:53.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.639 "dma_device_type": 2 00:14:53.639 } 00:14:53.639 ], 00:14:53.639 "driver_specific": { 00:14:53.639 "raid": { 00:14:53.639 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:53.639 "strip_size_kb": 0, 00:14:53.639 "state": "online", 00:14:53.639 "raid_level": "raid1", 
00:14:53.639 "superblock": true, 00:14:53.639 "num_base_bdevs": 2, 00:14:53.639 "num_base_bdevs_discovered": 2, 00:14:53.639 "num_base_bdevs_operational": 2, 00:14:53.639 "base_bdevs_list": [ 00:14:53.639 { 00:14:53.639 "name": "pt1", 00:14:53.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.639 "is_configured": true, 00:14:53.639 "data_offset": 2048, 00:14:53.639 "data_size": 63488 00:14:53.639 }, 00:14:53.639 { 00:14:53.639 "name": "pt2", 00:14:53.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.639 "is_configured": true, 00:14:53.639 "data_offset": 2048, 00:14:53.639 "data_size": 63488 00:14:53.639 } 00:14:53.639 ] 00:14:53.639 } 00:14:53.639 } 00:14:53.639 }' 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:53.639 pt2' 00:14:53.639 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.897 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.898 [2024-11-04 14:47:23.711874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d5b5de2-f108-457b-ab35-e2d5c48cb7d1 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d5b5de2-f108-457b-ab35-e2d5c48cb7d1 ']' 00:14:53.898 14:47:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.898 [2024-11-04 14:47:23.767474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.898 [2024-11-04 14:47:23.767511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.898 [2024-11-04 14:47:23.767642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.898 [2024-11-04 14:47:23.767734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.898 [2024-11-04 14:47:23.767756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.898 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:54.156 14:47:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.156 [2024-11-04 14:47:23.899581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:54.156 [2024-11-04 14:47:23.902449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:54.156 [2024-11-04 14:47:23.902730] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:54.156 [2024-11-04 14:47:23.902862] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:54.156 [2024-11-04 14:47:23.902904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.156 [2024-11-04 14:47:23.902929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:54.156 request: 00:14:54.156 { 00:14:54.156 "name": "raid_bdev1", 00:14:54.156 "raid_level": "raid1", 00:14:54.156 "base_bdevs": [ 00:14:54.156 "malloc1", 00:14:54.156 "malloc2" 00:14:54.156 ], 00:14:54.156 "superblock": false, 00:14:54.156 "method": "bdev_raid_create", 00:14:54.156 "req_id": 1 00:14:54.156 } 00:14:54.156 Got 
JSON-RPC error response 00:14:54.156 response: 00:14:54.156 { 00:14:54.156 "code": -17, 00:14:54.156 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:54.156 } 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.156 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.157 [2024-11-04 14:47:23.967705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:54.157 [2024-11-04 14:47:23.967955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:54.157 [2024-11-04 14:47:23.968129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:54.157 [2024-11-04 14:47:23.968296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.157 [2024-11-04 14:47:23.971788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.157 [2024-11-04 14:47:23.971961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:54.157 [2024-11-04 14:47:23.972342] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:54.157 [2024-11-04 14:47:23.972619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:54.157 pt1 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.157 
14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.157 14:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.157 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.157 "name": "raid_bdev1", 00:14:54.157 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:54.157 "strip_size_kb": 0, 00:14:54.157 "state": "configuring", 00:14:54.157 "raid_level": "raid1", 00:14:54.157 "superblock": true, 00:14:54.157 "num_base_bdevs": 2, 00:14:54.157 "num_base_bdevs_discovered": 1, 00:14:54.157 "num_base_bdevs_operational": 2, 00:14:54.157 "base_bdevs_list": [ 00:14:54.157 { 00:14:54.157 "name": "pt1", 00:14:54.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.157 "is_configured": true, 00:14:54.157 "data_offset": 2048, 00:14:54.157 "data_size": 63488 00:14:54.157 }, 00:14:54.157 { 00:14:54.157 "name": null, 00:14:54.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.157 "is_configured": false, 00:14:54.157 "data_offset": 2048, 00:14:54.157 "data_size": 63488 00:14:54.157 } 00:14:54.157 ] 00:14:54.157 }' 00:14:54.157 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.157 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.723 [2024-11-04 14:47:24.456619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.723 [2024-11-04 14:47:24.456770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.723 [2024-11-04 14:47:24.456821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:54.723 [2024-11-04 14:47:24.456850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.723 [2024-11-04 14:47:24.457807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.723 [2024-11-04 14:47:24.457863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.723 [2024-11-04 14:47:24.458040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:54.723 [2024-11-04 14:47:24.458102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.723 [2024-11-04 14:47:24.458381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:54.723 [2024-11-04 14:47:24.458416] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.723 [2024-11-04 14:47:24.458872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:54.723 [2024-11-04 14:47:24.459186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:54.723 [2024-11-04 14:47:24.459211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:14:54.723 [2024-11-04 14:47:24.459430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.723 pt2 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.723 "name": "raid_bdev1", 00:14:54.723 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:54.723 "strip_size_kb": 0, 00:14:54.723 "state": "online", 00:14:54.723 "raid_level": "raid1", 00:14:54.723 "superblock": true, 00:14:54.723 "num_base_bdevs": 2, 00:14:54.723 "num_base_bdevs_discovered": 2, 00:14:54.723 "num_base_bdevs_operational": 2, 00:14:54.723 "base_bdevs_list": [ 00:14:54.723 { 00:14:54.723 "name": "pt1", 00:14:54.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.723 "is_configured": true, 00:14:54.723 "data_offset": 2048, 00:14:54.723 "data_size": 63488 00:14:54.723 }, 00:14:54.723 { 00:14:54.723 "name": "pt2", 00:14:54.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.723 "is_configured": true, 00:14:54.723 "data_offset": 2048, 00:14:54.723 "data_size": 63488 00:14:54.723 } 00:14:54.723 ] 00:14:54.723 }' 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.723 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.289 14:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.289 [2024-11-04 14:47:24.989033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.289 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.289 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:55.289 "name": "raid_bdev1", 00:14:55.289 "aliases": [ 00:14:55.289 "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1" 00:14:55.289 ], 00:14:55.289 "product_name": "Raid Volume", 00:14:55.289 "block_size": 512, 00:14:55.289 "num_blocks": 63488, 00:14:55.289 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:55.289 "assigned_rate_limits": { 00:14:55.289 "rw_ios_per_sec": 0, 00:14:55.289 "rw_mbytes_per_sec": 0, 00:14:55.289 "r_mbytes_per_sec": 0, 00:14:55.289 "w_mbytes_per_sec": 0 00:14:55.289 }, 00:14:55.289 "claimed": false, 00:14:55.289 "zoned": false, 00:14:55.289 "supported_io_types": { 00:14:55.289 "read": true, 00:14:55.289 "write": true, 00:14:55.289 "unmap": false, 00:14:55.289 "flush": false, 00:14:55.289 "reset": true, 00:14:55.289 "nvme_admin": false, 00:14:55.289 "nvme_io": false, 00:14:55.289 "nvme_io_md": false, 00:14:55.289 "write_zeroes": true, 00:14:55.289 "zcopy": false, 00:14:55.289 "get_zone_info": false, 00:14:55.289 "zone_management": false, 00:14:55.289 "zone_append": false, 00:14:55.289 "compare": false, 00:14:55.289 "compare_and_write": false, 00:14:55.289 "abort": false, 00:14:55.289 "seek_hole": false, 00:14:55.289 "seek_data": false, 00:14:55.289 "copy": false, 00:14:55.289 "nvme_iov_md": false 00:14:55.289 }, 00:14:55.289 "memory_domains": [ 00:14:55.289 { 00:14:55.289 "dma_device_id": 
"system", 00:14:55.289 "dma_device_type": 1 00:14:55.289 }, 00:14:55.289 { 00:14:55.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.289 "dma_device_type": 2 00:14:55.289 }, 00:14:55.289 { 00:14:55.289 "dma_device_id": "system", 00:14:55.289 "dma_device_type": 1 00:14:55.289 }, 00:14:55.289 { 00:14:55.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.289 "dma_device_type": 2 00:14:55.289 } 00:14:55.289 ], 00:14:55.289 "driver_specific": { 00:14:55.289 "raid": { 00:14:55.289 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:55.289 "strip_size_kb": 0, 00:14:55.289 "state": "online", 00:14:55.289 "raid_level": "raid1", 00:14:55.289 "superblock": true, 00:14:55.289 "num_base_bdevs": 2, 00:14:55.289 "num_base_bdevs_discovered": 2, 00:14:55.290 "num_base_bdevs_operational": 2, 00:14:55.290 "base_bdevs_list": [ 00:14:55.290 { 00:14:55.290 "name": "pt1", 00:14:55.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.290 "is_configured": true, 00:14:55.290 "data_offset": 2048, 00:14:55.290 "data_size": 63488 00:14:55.290 }, 00:14:55.290 { 00:14:55.290 "name": "pt2", 00:14:55.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.290 "is_configured": true, 00:14:55.290 "data_offset": 2048, 00:14:55.290 "data_size": 63488 00:14:55.290 } 00:14:55.290 ] 00:14:55.290 } 00:14:55.290 } 00:14:55.290 }' 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:55.290 pt2' 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.290 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.548 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.549 [2024-11-04 14:47:25.249068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d5b5de2-f108-457b-ab35-e2d5c48cb7d1 '!=' 4d5b5de2-f108-457b-ab35-e2d5c48cb7d1 ']' 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.549 [2024-11-04 14:47:25.308865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.549 "name": "raid_bdev1", 00:14:55.549 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:55.549 "strip_size_kb": 0, 00:14:55.549 "state": "online", 00:14:55.549 "raid_level": "raid1", 00:14:55.549 "superblock": true, 00:14:55.549 "num_base_bdevs": 2, 00:14:55.549 "num_base_bdevs_discovered": 1, 00:14:55.549 "num_base_bdevs_operational": 1, 00:14:55.549 "base_bdevs_list": [ 00:14:55.549 { 00:14:55.549 "name": null, 00:14:55.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.549 "is_configured": false, 00:14:55.549 "data_offset": 0, 00:14:55.549 "data_size": 63488 00:14:55.549 }, 00:14:55.549 { 00:14:55.549 "name": "pt2", 00:14:55.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.549 "is_configured": true, 00:14:55.549 "data_offset": 2048, 00:14:55.549 "data_size": 63488 00:14:55.549 } 00:14:55.549 ] 00:14:55.549 }' 
00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.549 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.116 [2024-11-04 14:47:25.832944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.116 [2024-11-04 14:47:25.833215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.116 [2024-11-04 14:47:25.833514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.116 [2024-11-04 14:47:25.833700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.116 [2024-11-04 14:47:25.833735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.116 [2024-11-04 14:47:25.912932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.116 [2024-11-04 14:47:25.913065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.116 [2024-11-04 14:47:25.913098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:56.116 [2024-11-04 14:47:25.913116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.116 
[2024-11-04 14:47:25.916438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.116 [2024-11-04 14:47:25.916491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.116 [2024-11-04 14:47:25.916621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:56.116 [2024-11-04 14:47:25.916700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.116 [2024-11-04 14:47:25.916844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:56.116 [2024-11-04 14:47:25.916867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:56.116 [2024-11-04 14:47:25.917175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:56.116 [2024-11-04 14:47:25.917428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:56.116 [2024-11-04 14:47:25.917446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:56.116 [2024-11-04 14:47:25.917681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.116 pt2 00:14:56.116 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.117 "name": "raid_bdev1", 00:14:56.117 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:56.117 "strip_size_kb": 0, 00:14:56.117 "state": "online", 00:14:56.117 "raid_level": "raid1", 00:14:56.117 "superblock": true, 00:14:56.117 "num_base_bdevs": 2, 00:14:56.117 "num_base_bdevs_discovered": 1, 00:14:56.117 "num_base_bdevs_operational": 1, 00:14:56.117 "base_bdevs_list": [ 00:14:56.117 { 00:14:56.117 "name": null, 00:14:56.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.117 "is_configured": false, 00:14:56.117 "data_offset": 2048, 00:14:56.117 "data_size": 63488 00:14:56.117 }, 00:14:56.117 { 00:14:56.117 "name": "pt2", 00:14:56.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.117 "is_configured": true, 00:14:56.117 "data_offset": 2048, 00:14:56.117 "data_size": 63488 00:14:56.117 } 00:14:56.117 ] 00:14:56.117 }' 
00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.117 14:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.684 [2024-11-04 14:47:26.429107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.684 [2024-11-04 14:47:26.429429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.684 [2024-11-04 14:47:26.429570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.684 [2024-11-04 14:47:26.429652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.684 [2024-11-04 14:47:26.429669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.684 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.685 [2024-11-04 14:47:26.489180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.685 [2024-11-04 14:47:26.489324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.685 [2024-11-04 14:47:26.489361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:56.685 [2024-11-04 14:47:26.489376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.685 [2024-11-04 14:47:26.492607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.685 [2024-11-04 14:47:26.492658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.685 [2024-11-04 14:47:26.492797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:56.685 [2024-11-04 14:47:26.492862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.685 [2024-11-04 14:47:26.493044] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:56.685 [2024-11-04 14:47:26.493063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.685 [2024-11-04 14:47:26.493088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:56.685 [2024-11-04 14:47:26.493163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:14:56.685 [2024-11-04 14:47:26.493358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:56.685 [2024-11-04 14:47:26.493376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:56.685 pt1 00:14:56.685 [2024-11-04 14:47:26.493734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:56.685 [2024-11-04 14:47:26.493927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:56.685 [2024-11-04 14:47:26.493948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:56.685 [2024-11-04 14:47:26.494140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.685 "name": "raid_bdev1", 00:14:56.685 "uuid": "4d5b5de2-f108-457b-ab35-e2d5c48cb7d1", 00:14:56.685 "strip_size_kb": 0, 00:14:56.685 "state": "online", 00:14:56.685 "raid_level": "raid1", 00:14:56.685 "superblock": true, 00:14:56.685 "num_base_bdevs": 2, 00:14:56.685 "num_base_bdevs_discovered": 1, 00:14:56.685 "num_base_bdevs_operational": 1, 00:14:56.685 "base_bdevs_list": [ 00:14:56.685 { 00:14:56.685 "name": null, 00:14:56.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.685 "is_configured": false, 00:14:56.685 "data_offset": 2048, 00:14:56.685 "data_size": 63488 00:14:56.685 }, 00:14:56.685 { 00:14:56.685 "name": "pt2", 00:14:56.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.685 "is_configured": true, 00:14:56.685 "data_offset": 2048, 00:14:56.685 "data_size": 63488 00:14:56.685 } 00:14:56.685 ] 00:14:56.685 }' 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.685 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.251 14:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:57.251 14:47:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:57.251 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.251 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.251 14:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.251 14:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:57.251 14:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.251 14:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:57.251 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.251 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.251 [2024-11-04 14:47:27.033646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4d5b5de2-f108-457b-ab35-e2d5c48cb7d1 '!=' 4d5b5de2-f108-457b-ab35-e2d5c48cb7d1 ']' 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63280 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63280 ']' 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63280 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63280 00:14:57.252 killing 
process with pid 63280 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63280' 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63280 00:14:57.252 14:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63280 00:14:57.252 [2024-11-04 14:47:27.116492] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.252 [2024-11-04 14:47:27.116644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.252 [2024-11-04 14:47:27.116720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.252 [2024-11-04 14:47:27.116743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:57.510 [2024-11-04 14:47:27.321303] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.884 14:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:58.884 00:14:58.884 real 0m6.859s 00:14:58.884 user 0m10.737s 00:14:58.884 sys 0m1.051s 00:14:58.884 14:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:58.884 ************************************ 00:14:58.884 END TEST raid_superblock_test 00:14:58.884 ************************************ 00:14:58.884 14:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.884 14:47:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:14:58.884 14:47:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:58.884 14:47:28 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:14:58.884 14:47:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.884 ************************************ 00:14:58.884 START TEST raid_read_error_test 00:14:58.884 ************************************ 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.884 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:58.885 14:47:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2cvZG4zOck 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63616 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63616 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63616 ']' 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:58.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:58.885 14:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.885 [2024-11-04 14:47:28.624141] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:14:58.885 [2024-11-04 14:47:28.624342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63616 ] 00:14:59.143 [2024-11-04 14:47:28.808039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.143 [2024-11-04 14:47:28.970924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.401 [2024-11-04 14:47:29.197068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.401 [2024-11-04 14:47:29.197172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.967 BaseBdev1_malloc 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.967 true 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.967 [2024-11-04 14:47:29.673443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:59.967 [2024-11-04 14:47:29.673549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.967 [2024-11-04 14:47:29.673583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:59.967 [2024-11-04 14:47:29.673603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.967 [2024-11-04 14:47:29.676664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.967 [2024-11-04 14:47:29.676718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:59.967 BaseBdev1 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:59.967 BaseBdev2_malloc 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.967 true 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.967 [2024-11-04 14:47:29.734054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:59.967 [2024-11-04 14:47:29.734403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.967 [2024-11-04 14:47:29.734446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:59.967 [2024-11-04 14:47:29.734467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.967 [2024-11-04 14:47:29.737603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.967 [2024-11-04 14:47:29.737790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:59.967 BaseBdev2 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:59.967 14:47:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.967 [2024-11-04 14:47:29.742189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.967 [2024-11-04 14:47:29.744888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.967 [2024-11-04 14:47:29.745329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:59.967 [2024-11-04 14:47:29.745361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.967 [2024-11-04 14:47:29.745721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:59.967 [2024-11-04 14:47:29.745975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:59.967 [2024-11-04 14:47:29.745993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:59.967 [2024-11-04 14:47:29.746293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.967 14:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.967 "name": "raid_bdev1", 00:14:59.967 "uuid": "95dde9ae-3c1b-4d6d-8e1a-deb5c80c88af", 00:14:59.967 "strip_size_kb": 0, 00:14:59.967 "state": "online", 00:14:59.967 "raid_level": "raid1", 00:14:59.967 "superblock": true, 00:14:59.967 "num_base_bdevs": 2, 00:14:59.967 "num_base_bdevs_discovered": 2, 00:14:59.967 "num_base_bdevs_operational": 2, 00:14:59.967 "base_bdevs_list": [ 00:14:59.967 { 00:14:59.967 "name": "BaseBdev1", 00:14:59.967 "uuid": "2a352fea-4a0b-535a-a788-8a1adb626399", 00:14:59.967 "is_configured": true, 00:14:59.967 "data_offset": 2048, 00:14:59.967 "data_size": 63488 00:14:59.967 }, 00:14:59.967 { 00:14:59.967 "name": "BaseBdev2", 00:14:59.967 "uuid": "62eb15b5-dc93-52af-b5c5-2620dd160d69", 00:14:59.967 "is_configured": true, 00:14:59.967 "data_offset": 2048, 00:14:59.967 "data_size": 63488 00:14:59.968 } 00:14:59.968 ] 00:14:59.968 }' 00:14:59.968 14:47:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.968 14:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.534 14:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:00.534 14:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:00.534 [2024-11-04 14:47:30.376009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.470 14:47:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.470 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.471 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.471 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.471 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.471 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.471 "name": "raid_bdev1", 00:15:01.471 "uuid": "95dde9ae-3c1b-4d6d-8e1a-deb5c80c88af", 00:15:01.471 "strip_size_kb": 0, 00:15:01.471 "state": "online", 00:15:01.471 "raid_level": "raid1", 00:15:01.471 "superblock": true, 00:15:01.471 "num_base_bdevs": 2, 00:15:01.471 "num_base_bdevs_discovered": 2, 00:15:01.471 "num_base_bdevs_operational": 2, 00:15:01.471 "base_bdevs_list": [ 00:15:01.471 { 00:15:01.471 "name": "BaseBdev1", 00:15:01.471 "uuid": "2a352fea-4a0b-535a-a788-8a1adb626399", 00:15:01.471 "is_configured": true, 00:15:01.471 "data_offset": 2048, 00:15:01.471 "data_size": 63488 00:15:01.471 }, 00:15:01.471 { 00:15:01.471 "name": "BaseBdev2", 00:15:01.471 "uuid": "62eb15b5-dc93-52af-b5c5-2620dd160d69", 00:15:01.471 "is_configured": true, 00:15:01.471 "data_offset": 2048, 00:15:01.471 "data_size": 63488 
00:15:01.471 } 00:15:01.471 ] 00:15:01.471 }' 00:15:01.471 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.471 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.038 [2024-11-04 14:47:31.778453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.038 [2024-11-04 14:47:31.778523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.038 [2024-11-04 14:47:31.782197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.038 [2024-11-04 14:47:31.782396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.038 [2024-11-04 14:47:31.782633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.038 [2024-11-04 14:47:31.782789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:02.038 { 00:15:02.038 "results": [ 00:15:02.038 { 00:15:02.038 "job": "raid_bdev1", 00:15:02.038 "core_mask": "0x1", 00:15:02.038 "workload": "randrw", 00:15:02.038 "percentage": 50, 00:15:02.038 "status": "finished", 00:15:02.038 "queue_depth": 1, 00:15:02.038 "io_size": 131072, 00:15:02.038 "runtime": 1.399527, 00:15:02.038 "iops": 10337.063879439267, 00:15:02.038 "mibps": 1292.1329849299084, 00:15:02.038 "io_failed": 0, 00:15:02.038 "io_timeout": 0, 00:15:02.038 "avg_latency_us": 92.60288480994363, 00:15:02.038 "min_latency_us": 43.52, 00:15:02.038 "max_latency_us": 1832.0290909090909 00:15:02.038 } 00:15:02.038 ], 00:15:02.038 
"core_count": 1 00:15:02.038 } 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63616 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63616 ']' 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63616 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63616 00:15:02.038 killing process with pid 63616 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63616' 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63616 00:15:02.038 14:47:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63616 00:15:02.038 [2024-11-04 14:47:31.823338] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.297 [2024-11-04 14:47:31.958629] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2cvZG4zOck 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:03.672 ************************************ 00:15:03.672 END TEST 
raid_read_error_test 00:15:03.672 ************************************ 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:03.672 00:15:03.672 real 0m4.653s 00:15:03.672 user 0m5.729s 00:15:03.672 sys 0m0.618s 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:03.672 14:47:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.672 14:47:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:15:03.672 14:47:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:03.672 14:47:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:03.672 14:47:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.672 ************************************ 00:15:03.672 START TEST raid_write_error_test 00:15:03.672 ************************************ 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EuYyyXJ2B1 00:15:03.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63756 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63756 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63756 ']' 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:03.672 14:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.672 [2024-11-04 14:47:33.343151] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:15:03.672 [2024-11-04 14:47:33.344270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63756 ] 00:15:03.672 [2024-11-04 14:47:33.536922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.931 [2024-11-04 14:47:33.683731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.194 [2024-11-04 14:47:33.929820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.194 [2024-11-04 14:47:33.929925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.452 BaseBdev1_malloc 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.452 true 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.452 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.709 [2024-11-04 14:47:34.346260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:04.709 [2024-11-04 14:47:34.346552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.710 [2024-11-04 14:47:34.346599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:04.710 [2024-11-04 14:47:34.346630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.710 [2024-11-04 14:47:34.349822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.710 [2024-11-04 14:47:34.350008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.710 BaseBdev1 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.710 BaseBdev2_malloc 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:04.710 14:47:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.710 true 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.710 [2024-11-04 14:47:34.418664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:04.710 [2024-11-04 14:47:34.418751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.710 [2024-11-04 14:47:34.418786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:04.710 [2024-11-04 14:47:34.418806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.710 [2024-11-04 14:47:34.421992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.710 [2024-11-04 14:47:34.422045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:04.710 BaseBdev2 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.710 [2024-11-04 14:47:34.426911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:15:04.710 [2024-11-04 14:47:34.429679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.710 [2024-11-04 14:47:34.429985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:04.710 [2024-11-04 14:47:34.430009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:04.710 [2024-11-04 14:47:34.430376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:04.710 [2024-11-04 14:47:34.430657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:04.710 [2024-11-04 14:47:34.430676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:04.710 [2024-11-04 14:47:34.430968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.710 "name": "raid_bdev1", 00:15:04.710 "uuid": "ad4a1b43-65d4-4cb7-8818-536c46c84f97", 00:15:04.710 "strip_size_kb": 0, 00:15:04.710 "state": "online", 00:15:04.710 "raid_level": "raid1", 00:15:04.710 "superblock": true, 00:15:04.710 "num_base_bdevs": 2, 00:15:04.710 "num_base_bdevs_discovered": 2, 00:15:04.710 "num_base_bdevs_operational": 2, 00:15:04.710 "base_bdevs_list": [ 00:15:04.710 { 00:15:04.710 "name": "BaseBdev1", 00:15:04.710 "uuid": "633e9e1d-8a10-560e-9d66-327b34445790", 00:15:04.710 "is_configured": true, 00:15:04.710 "data_offset": 2048, 00:15:04.710 "data_size": 63488 00:15:04.710 }, 00:15:04.710 { 00:15:04.710 "name": "BaseBdev2", 00:15:04.710 "uuid": "ef256ece-48c6-5568-a50d-603c4dd7a4b9", 00:15:04.710 "is_configured": true, 00:15:04.710 "data_offset": 2048, 00:15:04.710 "data_size": 63488 00:15:04.710 } 00:15:04.710 ] 00:15:04.710 }' 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.710 14:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.277 14:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:05.277 14:47:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:05.277 [2024-11-04 14:47:35.088592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.210 [2024-11-04 14:47:35.950714] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:06.210 [2024-11-04 14:47:35.950827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.210 [2024-11-04 14:47:35.951079] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.210 14:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.210 14:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.210 "name": "raid_bdev1", 00:15:06.210 "uuid": "ad4a1b43-65d4-4cb7-8818-536c46c84f97", 00:15:06.210 "strip_size_kb": 0, 00:15:06.210 "state": "online", 00:15:06.210 "raid_level": "raid1", 00:15:06.210 "superblock": true, 00:15:06.210 "num_base_bdevs": 2, 00:15:06.210 "num_base_bdevs_discovered": 1, 00:15:06.210 "num_base_bdevs_operational": 1, 00:15:06.210 "base_bdevs_list": [ 00:15:06.210 { 00:15:06.210 "name": null, 00:15:06.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.210 "is_configured": false, 00:15:06.210 "data_offset": 0, 00:15:06.210 "data_size": 63488 00:15:06.210 }, 00:15:06.210 { 00:15:06.210 "name": 
"BaseBdev2", 00:15:06.210 "uuid": "ef256ece-48c6-5568-a50d-603c4dd7a4b9", 00:15:06.210 "is_configured": true, 00:15:06.210 "data_offset": 2048, 00:15:06.210 "data_size": 63488 00:15:06.210 } 00:15:06.210 ] 00:15:06.210 }' 00:15:06.210 14:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.210 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.776 14:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.776 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.776 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.776 [2024-11-04 14:47:36.482714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.776 [2024-11-04 14:47:36.482784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.776 [2024-11-04 14:47:36.486149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.776 { 00:15:06.776 "results": [ 00:15:06.776 { 00:15:06.776 "job": "raid_bdev1", 00:15:06.776 "core_mask": "0x1", 00:15:06.776 "workload": "randrw", 00:15:06.776 "percentage": 50, 00:15:06.776 "status": "finished", 00:15:06.776 "queue_depth": 1, 00:15:06.776 "io_size": 131072, 00:15:06.776 "runtime": 1.391423, 00:15:06.776 "iops": 12216.9893698753, 00:15:06.777 "mibps": 1527.1236712344125, 00:15:06.777 "io_failed": 0, 00:15:06.777 "io_timeout": 0, 00:15:06.777 "avg_latency_us": 77.72458401296332, 00:15:06.777 "min_latency_us": 42.589090909090906, 00:15:06.777 "max_latency_us": 1839.4763636363637 00:15:06.777 } 00:15:06.777 ], 00:15:06.777 "core_count": 1 00:15:06.777 } 00:15:06.777 [2024-11-04 14:47:36.486551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.777 [2024-11-04 14:47:36.486756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.777 [2024-11-04 14:47:36.486780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63756 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63756 ']' 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63756 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63756 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:06.777 killing process with pid 63756 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63756' 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63756 00:15:06.777 14:47:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63756 00:15:06.777 [2024-11-04 14:47:36.523748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.777 [2024-11-04 14:47:36.659435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EuYyyXJ2B1 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- 
# grep raid_bdev1 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:08.199 00:15:08.199 real 0m4.654s 00:15:08.199 user 0m5.707s 00:15:08.199 sys 0m0.649s 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:08.199 ************************************ 00:15:08.199 END TEST raid_write_error_test 00:15:08.199 ************************************ 00:15:08.199 14:47:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.199 14:47:37 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:15:08.199 14:47:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:08.199 14:47:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:08.199 14:47:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:08.199 14:47:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:08.199 14:47:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.199 ************************************ 00:15:08.199 START TEST raid_state_function_test 00:15:08.199 ************************************ 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:08.199 14:47:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:08.199 Process raid pid: 63905 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63905 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63905' 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63905 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63905 ']' 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:08.199 14:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.199 [2024-11-04 14:47:38.030333] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:15:08.200 [2024-11-04 14:47:38.030776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.458 [2024-11-04 14:47:38.219546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.716 [2024-11-04 14:47:38.393937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.974 [2024-11-04 14:47:38.628612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.974 [2024-11-04 14:47:38.628681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.232 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:09.232 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:09.232 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:09.232 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.232 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.232 [2024-11-04 14:47:39.099687] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.232 [2024-11-04 14:47:39.099780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.232 [2024-11-04 14:47:39.099798] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.232 [2024-11-04 14:47:39.099815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.232 [2024-11-04 14:47:39.099826] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:09.232 [2024-11-04 14:47:39.099841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:09.232 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.232 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:09.232 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.233 14:47:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.233 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.491 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.491 "name": "Existed_Raid", 00:15:09.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.491 "strip_size_kb": 64, 00:15:09.491 "state": "configuring", 00:15:09.491 "raid_level": "raid0", 00:15:09.491 "superblock": false, 00:15:09.491 "num_base_bdevs": 3, 00:15:09.491 "num_base_bdevs_discovered": 0, 00:15:09.491 "num_base_bdevs_operational": 3, 00:15:09.491 "base_bdevs_list": [ 00:15:09.491 { 00:15:09.491 "name": "BaseBdev1", 00:15:09.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.491 "is_configured": false, 00:15:09.491 "data_offset": 0, 00:15:09.491 "data_size": 0 00:15:09.491 }, 00:15:09.491 { 00:15:09.491 "name": "BaseBdev2", 00:15:09.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.491 "is_configured": false, 00:15:09.491 "data_offset": 0, 00:15:09.491 "data_size": 0 00:15:09.491 }, 00:15:09.491 { 00:15:09.491 "name": "BaseBdev3", 00:15:09.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.491 "is_configured": false, 00:15:09.491 "data_offset": 0, 00:15:09.491 "data_size": 0 00:15:09.491 } 00:15:09.491 ] 00:15:09.491 }' 00:15:09.491 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.491 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.056 14:47:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.056 [2024-11-04 14:47:39.667780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.056 [2024-11-04 14:47:39.667836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.056 [2024-11-04 14:47:39.675779] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.056 [2024-11-04 14:47:39.676051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.056 [2024-11-04 14:47:39.676216] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.056 [2024-11-04 14:47:39.676379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.056 [2024-11-04 14:47:39.676488] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:10.056 [2024-11-04 14:47:39.676546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:10.056 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.056 [2024-11-04 14:47:39.729363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.056 BaseBdev1 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.057 [ 00:15:10.057 { 00:15:10.057 "name": "BaseBdev1", 00:15:10.057 "aliases": [ 00:15:10.057 "0efd3297-4fe9-4930-be59-a48af9aec92e" 00:15:10.057 ], 00:15:10.057 
"product_name": "Malloc disk", 00:15:10.057 "block_size": 512, 00:15:10.057 "num_blocks": 65536, 00:15:10.057 "uuid": "0efd3297-4fe9-4930-be59-a48af9aec92e", 00:15:10.057 "assigned_rate_limits": { 00:15:10.057 "rw_ios_per_sec": 0, 00:15:10.057 "rw_mbytes_per_sec": 0, 00:15:10.057 "r_mbytes_per_sec": 0, 00:15:10.057 "w_mbytes_per_sec": 0 00:15:10.057 }, 00:15:10.057 "claimed": true, 00:15:10.057 "claim_type": "exclusive_write", 00:15:10.057 "zoned": false, 00:15:10.057 "supported_io_types": { 00:15:10.057 "read": true, 00:15:10.057 "write": true, 00:15:10.057 "unmap": true, 00:15:10.057 "flush": true, 00:15:10.057 "reset": true, 00:15:10.057 "nvme_admin": false, 00:15:10.057 "nvme_io": false, 00:15:10.057 "nvme_io_md": false, 00:15:10.057 "write_zeroes": true, 00:15:10.057 "zcopy": true, 00:15:10.057 "get_zone_info": false, 00:15:10.057 "zone_management": false, 00:15:10.057 "zone_append": false, 00:15:10.057 "compare": false, 00:15:10.057 "compare_and_write": false, 00:15:10.057 "abort": true, 00:15:10.057 "seek_hole": false, 00:15:10.057 "seek_data": false, 00:15:10.057 "copy": true, 00:15:10.057 "nvme_iov_md": false 00:15:10.057 }, 00:15:10.057 "memory_domains": [ 00:15:10.057 { 00:15:10.057 "dma_device_id": "system", 00:15:10.057 "dma_device_type": 1 00:15:10.057 }, 00:15:10.057 { 00:15:10.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.057 "dma_device_type": 2 00:15:10.057 } 00:15:10.057 ], 00:15:10.057 "driver_specific": {} 00:15:10.057 } 00:15:10.057 ] 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.057 14:47:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.057 "name": "Existed_Raid", 00:15:10.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.057 "strip_size_kb": 64, 00:15:10.057 "state": "configuring", 00:15:10.057 "raid_level": "raid0", 00:15:10.057 "superblock": false, 00:15:10.057 "num_base_bdevs": 3, 00:15:10.057 "num_base_bdevs_discovered": 1, 00:15:10.057 "num_base_bdevs_operational": 3, 00:15:10.057 "base_bdevs_list": [ 00:15:10.057 { 00:15:10.057 "name": "BaseBdev1", 
00:15:10.057 "uuid": "0efd3297-4fe9-4930-be59-a48af9aec92e", 00:15:10.057 "is_configured": true, 00:15:10.057 "data_offset": 0, 00:15:10.057 "data_size": 65536 00:15:10.057 }, 00:15:10.057 { 00:15:10.057 "name": "BaseBdev2", 00:15:10.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.057 "is_configured": false, 00:15:10.057 "data_offset": 0, 00:15:10.057 "data_size": 0 00:15:10.057 }, 00:15:10.057 { 00:15:10.057 "name": "BaseBdev3", 00:15:10.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.057 "is_configured": false, 00:15:10.057 "data_offset": 0, 00:15:10.057 "data_size": 0 00:15:10.057 } 00:15:10.057 ] 00:15:10.057 }' 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.057 14:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.623 [2024-11-04 14:47:40.265638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.623 [2024-11-04 14:47:40.265977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.623 [2024-11-04 
14:47:40.273684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.623 [2024-11-04 14:47:40.276321] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.623 [2024-11-04 14:47:40.276515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.623 [2024-11-04 14:47:40.276544] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:10.623 [2024-11-04 14:47:40.276561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.623 "name": "Existed_Raid", 00:15:10.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.623 "strip_size_kb": 64, 00:15:10.623 "state": "configuring", 00:15:10.623 "raid_level": "raid0", 00:15:10.623 "superblock": false, 00:15:10.623 "num_base_bdevs": 3, 00:15:10.623 "num_base_bdevs_discovered": 1, 00:15:10.623 "num_base_bdevs_operational": 3, 00:15:10.623 "base_bdevs_list": [ 00:15:10.623 { 00:15:10.623 "name": "BaseBdev1", 00:15:10.623 "uuid": "0efd3297-4fe9-4930-be59-a48af9aec92e", 00:15:10.623 "is_configured": true, 00:15:10.623 "data_offset": 0, 00:15:10.623 "data_size": 65536 00:15:10.623 }, 00:15:10.623 { 00:15:10.623 "name": "BaseBdev2", 00:15:10.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.623 "is_configured": false, 00:15:10.623 "data_offset": 0, 00:15:10.623 "data_size": 0 00:15:10.623 }, 00:15:10.623 { 00:15:10.623 "name": "BaseBdev3", 00:15:10.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.623 "is_configured": false, 00:15:10.623 "data_offset": 0, 00:15:10.623 "data_size": 0 00:15:10.623 } 00:15:10.623 ] 00:15:10.623 }' 00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:15:10.623 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.190 [2024-11-04 14:47:40.860497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.190 BaseBdev2 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:11.190 14:47:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.190 [ 00:15:11.190 { 00:15:11.190 "name": "BaseBdev2", 00:15:11.190 "aliases": [ 00:15:11.190 "6c51ad48-64b8-40f8-a899-210640d68811" 00:15:11.190 ], 00:15:11.190 "product_name": "Malloc disk", 00:15:11.190 "block_size": 512, 00:15:11.190 "num_blocks": 65536, 00:15:11.190 "uuid": "6c51ad48-64b8-40f8-a899-210640d68811", 00:15:11.190 "assigned_rate_limits": { 00:15:11.190 "rw_ios_per_sec": 0, 00:15:11.190 "rw_mbytes_per_sec": 0, 00:15:11.190 "r_mbytes_per_sec": 0, 00:15:11.190 "w_mbytes_per_sec": 0 00:15:11.190 }, 00:15:11.190 "claimed": true, 00:15:11.190 "claim_type": "exclusive_write", 00:15:11.190 "zoned": false, 00:15:11.190 "supported_io_types": { 00:15:11.190 "read": true, 00:15:11.190 "write": true, 00:15:11.190 "unmap": true, 00:15:11.190 "flush": true, 00:15:11.190 "reset": true, 00:15:11.190 "nvme_admin": false, 00:15:11.190 "nvme_io": false, 00:15:11.190 "nvme_io_md": false, 00:15:11.190 "write_zeroes": true, 00:15:11.190 "zcopy": true, 00:15:11.190 "get_zone_info": false, 00:15:11.190 "zone_management": false, 00:15:11.190 "zone_append": false, 00:15:11.190 "compare": false, 00:15:11.190 "compare_and_write": false, 00:15:11.190 "abort": true, 00:15:11.190 "seek_hole": false, 00:15:11.190 "seek_data": false, 00:15:11.190 "copy": true, 00:15:11.190 "nvme_iov_md": false 00:15:11.190 }, 00:15:11.190 "memory_domains": [ 00:15:11.190 { 00:15:11.190 "dma_device_id": "system", 00:15:11.190 "dma_device_type": 1 00:15:11.190 }, 00:15:11.190 { 00:15:11.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.190 "dma_device_type": 2 00:15:11.190 } 00:15:11.190 ], 00:15:11.190 "driver_specific": {} 00:15:11.190 } 00:15:11.190 ] 00:15:11.190 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.191 14:47:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.191 "name": "Existed_Raid", 00:15:11.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.191 "strip_size_kb": 64, 00:15:11.191 "state": "configuring", 00:15:11.191 "raid_level": "raid0", 00:15:11.191 "superblock": false, 00:15:11.191 "num_base_bdevs": 3, 00:15:11.191 "num_base_bdevs_discovered": 2, 00:15:11.191 "num_base_bdevs_operational": 3, 00:15:11.191 "base_bdevs_list": [ 00:15:11.191 { 00:15:11.191 "name": "BaseBdev1", 00:15:11.191 "uuid": "0efd3297-4fe9-4930-be59-a48af9aec92e", 00:15:11.191 "is_configured": true, 00:15:11.191 "data_offset": 0, 00:15:11.191 "data_size": 65536 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "name": "BaseBdev2", 00:15:11.191 "uuid": "6c51ad48-64b8-40f8-a899-210640d68811", 00:15:11.191 "is_configured": true, 00:15:11.191 "data_offset": 0, 00:15:11.191 "data_size": 65536 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "name": "BaseBdev3", 00:15:11.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.191 "is_configured": false, 00:15:11.191 "data_offset": 0, 00:15:11.191 "data_size": 0 00:15:11.191 } 00:15:11.191 ] 00:15:11.191 }' 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.191 14:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.757 [2024-11-04 14:47:41.504327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:11.757 [2024-11-04 14:47:41.504654] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:11.757 [2024-11-04 14:47:41.504731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:11.757 [2024-11-04 14:47:41.505240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:11.757 [2024-11-04 14:47:41.505494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:11.757 [2024-11-04 14:47:41.505511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:11.757 [2024-11-04 14:47:41.505885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.757 BaseBdev3 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.757 
14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.757 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.757 [ 00:15:11.757 { 00:15:11.757 "name": "BaseBdev3", 00:15:11.757 "aliases": [ 00:15:11.757 "dfaea7ee-bae0-40a0-80af-d617b775b841" 00:15:11.757 ], 00:15:11.757 "product_name": "Malloc disk", 00:15:11.757 "block_size": 512, 00:15:11.757 "num_blocks": 65536, 00:15:11.757 "uuid": "dfaea7ee-bae0-40a0-80af-d617b775b841", 00:15:11.757 "assigned_rate_limits": { 00:15:11.757 "rw_ios_per_sec": 0, 00:15:11.757 "rw_mbytes_per_sec": 0, 00:15:11.757 "r_mbytes_per_sec": 0, 00:15:11.757 "w_mbytes_per_sec": 0 00:15:11.757 }, 00:15:11.757 "claimed": true, 00:15:11.757 "claim_type": "exclusive_write", 00:15:11.757 "zoned": false, 00:15:11.757 "supported_io_types": { 00:15:11.758 "read": true, 00:15:11.758 "write": true, 00:15:11.758 "unmap": true, 00:15:11.758 "flush": true, 00:15:11.758 "reset": true, 00:15:11.758 "nvme_admin": false, 00:15:11.758 "nvme_io": false, 00:15:11.758 "nvme_io_md": false, 00:15:11.758 "write_zeroes": true, 00:15:11.758 "zcopy": true, 00:15:11.758 "get_zone_info": false, 00:15:11.758 "zone_management": false, 00:15:11.758 "zone_append": false, 00:15:11.758 "compare": false, 00:15:11.758 "compare_and_write": false, 00:15:11.758 "abort": true, 00:15:11.758 "seek_hole": false, 00:15:11.758 "seek_data": false, 00:15:11.758 "copy": true, 00:15:11.758 "nvme_iov_md": false 00:15:11.758 }, 00:15:11.758 "memory_domains": [ 00:15:11.758 { 00:15:11.758 "dma_device_id": "system", 00:15:11.758 "dma_device_type": 1 00:15:11.758 }, 00:15:11.758 { 00:15:11.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.758 "dma_device_type": 2 00:15:11.758 } 00:15:11.758 ], 00:15:11.758 "driver_specific": {} 00:15:11.758 } 00:15:11.758 ] 
00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.758 "name": "Existed_Raid", 00:15:11.758 "uuid": "7aa71435-bd4a-4096-adb9-f63dc70dd744", 00:15:11.758 "strip_size_kb": 64, 00:15:11.758 "state": "online", 00:15:11.758 "raid_level": "raid0", 00:15:11.758 "superblock": false, 00:15:11.758 "num_base_bdevs": 3, 00:15:11.758 "num_base_bdevs_discovered": 3, 00:15:11.758 "num_base_bdevs_operational": 3, 00:15:11.758 "base_bdevs_list": [ 00:15:11.758 { 00:15:11.758 "name": "BaseBdev1", 00:15:11.758 "uuid": "0efd3297-4fe9-4930-be59-a48af9aec92e", 00:15:11.758 "is_configured": true, 00:15:11.758 "data_offset": 0, 00:15:11.758 "data_size": 65536 00:15:11.758 }, 00:15:11.758 { 00:15:11.758 "name": "BaseBdev2", 00:15:11.758 "uuid": "6c51ad48-64b8-40f8-a899-210640d68811", 00:15:11.758 "is_configured": true, 00:15:11.758 "data_offset": 0, 00:15:11.758 "data_size": 65536 00:15:11.758 }, 00:15:11.758 { 00:15:11.758 "name": "BaseBdev3", 00:15:11.758 "uuid": "dfaea7ee-bae0-40a0-80af-d617b775b841", 00:15:11.758 "is_configured": true, 00:15:11.758 "data_offset": 0, 00:15:11.758 "data_size": 65536 00:15:11.758 } 00:15:11.758 ] 00:15:11.758 }' 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.758 14:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.325 [2024-11-04 14:47:42.096984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.325 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:12.325 "name": "Existed_Raid", 00:15:12.325 "aliases": [ 00:15:12.325 "7aa71435-bd4a-4096-adb9-f63dc70dd744" 00:15:12.325 ], 00:15:12.325 "product_name": "Raid Volume", 00:15:12.325 "block_size": 512, 00:15:12.325 "num_blocks": 196608, 00:15:12.325 "uuid": "7aa71435-bd4a-4096-adb9-f63dc70dd744", 00:15:12.325 "assigned_rate_limits": { 00:15:12.325 "rw_ios_per_sec": 0, 00:15:12.325 "rw_mbytes_per_sec": 0, 00:15:12.325 "r_mbytes_per_sec": 0, 00:15:12.325 "w_mbytes_per_sec": 0 00:15:12.325 }, 00:15:12.325 "claimed": false, 00:15:12.325 "zoned": false, 00:15:12.325 "supported_io_types": { 00:15:12.325 "read": true, 00:15:12.325 "write": true, 00:15:12.325 "unmap": true, 00:15:12.325 "flush": true, 00:15:12.325 "reset": true, 00:15:12.325 "nvme_admin": false, 00:15:12.325 "nvme_io": false, 00:15:12.325 "nvme_io_md": false, 00:15:12.325 "write_zeroes": true, 00:15:12.325 "zcopy": false, 00:15:12.325 "get_zone_info": false, 00:15:12.325 "zone_management": false, 00:15:12.325 
"zone_append": false, 00:15:12.325 "compare": false, 00:15:12.325 "compare_and_write": false, 00:15:12.325 "abort": false, 00:15:12.325 "seek_hole": false, 00:15:12.326 "seek_data": false, 00:15:12.326 "copy": false, 00:15:12.326 "nvme_iov_md": false 00:15:12.326 }, 00:15:12.326 "memory_domains": [ 00:15:12.326 { 00:15:12.326 "dma_device_id": "system", 00:15:12.326 "dma_device_type": 1 00:15:12.326 }, 00:15:12.326 { 00:15:12.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.326 "dma_device_type": 2 00:15:12.326 }, 00:15:12.326 { 00:15:12.326 "dma_device_id": "system", 00:15:12.326 "dma_device_type": 1 00:15:12.326 }, 00:15:12.326 { 00:15:12.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.326 "dma_device_type": 2 00:15:12.326 }, 00:15:12.326 { 00:15:12.326 "dma_device_id": "system", 00:15:12.326 "dma_device_type": 1 00:15:12.326 }, 00:15:12.326 { 00:15:12.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.326 "dma_device_type": 2 00:15:12.326 } 00:15:12.326 ], 00:15:12.326 "driver_specific": { 00:15:12.326 "raid": { 00:15:12.326 "uuid": "7aa71435-bd4a-4096-adb9-f63dc70dd744", 00:15:12.326 "strip_size_kb": 64, 00:15:12.326 "state": "online", 00:15:12.326 "raid_level": "raid0", 00:15:12.326 "superblock": false, 00:15:12.326 "num_base_bdevs": 3, 00:15:12.326 "num_base_bdevs_discovered": 3, 00:15:12.326 "num_base_bdevs_operational": 3, 00:15:12.326 "base_bdevs_list": [ 00:15:12.326 { 00:15:12.326 "name": "BaseBdev1", 00:15:12.326 "uuid": "0efd3297-4fe9-4930-be59-a48af9aec92e", 00:15:12.326 "is_configured": true, 00:15:12.326 "data_offset": 0, 00:15:12.326 "data_size": 65536 00:15:12.326 }, 00:15:12.326 { 00:15:12.326 "name": "BaseBdev2", 00:15:12.326 "uuid": "6c51ad48-64b8-40f8-a899-210640d68811", 00:15:12.326 "is_configured": true, 00:15:12.326 "data_offset": 0, 00:15:12.326 "data_size": 65536 00:15:12.326 }, 00:15:12.326 { 00:15:12.326 "name": "BaseBdev3", 00:15:12.326 "uuid": "dfaea7ee-bae0-40a0-80af-d617b775b841", 00:15:12.326 "is_configured": true, 
00:15:12.326 "data_offset": 0, 00:15:12.326 "data_size": 65536 00:15:12.326 } 00:15:12.326 ] 00:15:12.326 } 00:15:12.326 } 00:15:12.326 }' 00:15:12.326 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.326 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:12.326 BaseBdev2 00:15:12.326 BaseBdev3' 00:15:12.326 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.584 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.584 [2024-11-04 14:47:42.392782] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.584 [2024-11-04 14:47:42.392826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.584 [2024-11-04 14:47:42.392910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.844 "name": "Existed_Raid", 00:15:12.844 "uuid": "7aa71435-bd4a-4096-adb9-f63dc70dd744", 00:15:12.844 "strip_size_kb": 64, 00:15:12.844 "state": "offline", 00:15:12.844 "raid_level": "raid0", 00:15:12.844 "superblock": false, 00:15:12.844 "num_base_bdevs": 3, 00:15:12.844 "num_base_bdevs_discovered": 2, 00:15:12.844 "num_base_bdevs_operational": 2, 00:15:12.844 "base_bdevs_list": [ 00:15:12.844 { 00:15:12.844 "name": null, 00:15:12.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.844 "is_configured": false, 00:15:12.844 "data_offset": 0, 00:15:12.844 "data_size": 65536 00:15:12.844 }, 00:15:12.844 { 00:15:12.844 "name": "BaseBdev2", 00:15:12.844 "uuid": "6c51ad48-64b8-40f8-a899-210640d68811", 00:15:12.844 "is_configured": true, 00:15:12.844 "data_offset": 0, 00:15:12.844 "data_size": 65536 00:15:12.844 }, 00:15:12.844 { 00:15:12.844 "name": "BaseBdev3", 00:15:12.844 "uuid": "dfaea7ee-bae0-40a0-80af-d617b775b841", 00:15:12.844 "is_configured": true, 00:15:12.844 "data_offset": 0, 00:15:12.844 "data_size": 65536 00:15:12.844 } 00:15:12.844 ] 00:15:12.844 }' 00:15:12.844 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.844 14:47:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.103 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:13.361 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.361 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.361 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.361 14:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.361 14:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.361 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.362 [2024-11-04 14:47:43.055950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.362 14:47:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.362 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.362 [2024-11-04 14:47:43.210258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:13.362 [2024-11-04 14:47:43.210485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.620 BaseBdev2 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:13.620 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.621 [ 00:15:13.621 { 00:15:13.621 "name": "BaseBdev2", 00:15:13.621 "aliases": [ 00:15:13.621 "555759dd-8016-407b-8a4f-e3367a5d0dc0" 00:15:13.621 ], 00:15:13.621 "product_name": "Malloc disk", 00:15:13.621 "block_size": 512, 00:15:13.621 "num_blocks": 65536, 00:15:13.621 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:13.621 "assigned_rate_limits": { 00:15:13.621 "rw_ios_per_sec": 0, 00:15:13.621 "rw_mbytes_per_sec": 0, 00:15:13.621 "r_mbytes_per_sec": 0, 00:15:13.621 "w_mbytes_per_sec": 0 00:15:13.621 }, 00:15:13.621 "claimed": false, 00:15:13.621 "zoned": false, 00:15:13.621 "supported_io_types": { 00:15:13.621 "read": true, 00:15:13.621 "write": true, 00:15:13.621 "unmap": true, 00:15:13.621 "flush": true, 00:15:13.621 "reset": true, 00:15:13.621 "nvme_admin": false, 00:15:13.621 "nvme_io": false, 00:15:13.621 "nvme_io_md": false, 00:15:13.621 "write_zeroes": true, 00:15:13.621 "zcopy": true, 00:15:13.621 "get_zone_info": false, 00:15:13.621 "zone_management": false, 00:15:13.621 "zone_append": false, 00:15:13.621 "compare": false, 00:15:13.621 "compare_and_write": false, 00:15:13.621 "abort": true, 00:15:13.621 "seek_hole": false, 00:15:13.621 "seek_data": false, 00:15:13.621 "copy": true, 00:15:13.621 "nvme_iov_md": false 00:15:13.621 }, 00:15:13.621 "memory_domains": [ 00:15:13.621 { 00:15:13.621 "dma_device_id": "system", 00:15:13.621 "dma_device_type": 1 00:15:13.621 }, 
00:15:13.621 { 00:15:13.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.621 "dma_device_type": 2 00:15:13.621 } 00:15:13.621 ], 00:15:13.621 "driver_specific": {} 00:15:13.621 } 00:15:13.621 ] 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.621 BaseBdev3 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.621 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.621 [ 00:15:13.621 { 00:15:13.621 "name": "BaseBdev3", 00:15:13.621 "aliases": [ 00:15:13.621 "78b256bc-8512-4c88-b718-b8d25d615d43" 00:15:13.621 ], 00:15:13.621 "product_name": "Malloc disk", 00:15:13.621 "block_size": 512, 00:15:13.621 "num_blocks": 65536, 00:15:13.621 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:13.621 "assigned_rate_limits": { 00:15:13.621 "rw_ios_per_sec": 0, 00:15:13.621 "rw_mbytes_per_sec": 0, 00:15:13.621 "r_mbytes_per_sec": 0, 00:15:13.621 "w_mbytes_per_sec": 0 00:15:13.621 }, 00:15:13.621 "claimed": false, 00:15:13.621 "zoned": false, 00:15:13.621 "supported_io_types": { 00:15:13.621 "read": true, 00:15:13.621 "write": true, 00:15:13.621 "unmap": true, 00:15:13.621 "flush": true, 00:15:13.621 "reset": true, 00:15:13.621 "nvme_admin": false, 00:15:13.621 "nvme_io": false, 00:15:13.621 "nvme_io_md": false, 00:15:13.621 "write_zeroes": true, 00:15:13.621 "zcopy": true, 00:15:13.621 "get_zone_info": false, 00:15:13.621 "zone_management": false, 00:15:13.621 "zone_append": false, 00:15:13.621 "compare": false, 00:15:13.621 "compare_and_write": false, 00:15:13.621 "abort": true, 00:15:13.621 "seek_hole": false, 00:15:13.621 "seek_data": false, 00:15:13.621 "copy": true, 00:15:13.621 "nvme_iov_md": false 00:15:13.621 }, 00:15:13.621 "memory_domains": [ 00:15:13.621 { 00:15:13.621 "dma_device_id": "system", 00:15:13.621 "dma_device_type": 1 00:15:13.621 }, 00:15:13.621 { 
00:15:13.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.621 "dma_device_type": 2 00:15:13.621 } 00:15:13.621 ], 00:15:13.621 "driver_specific": {} 00:15:13.621 } 00:15:13.621 ] 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.879 [2024-11-04 14:47:43.516554] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.879 [2024-11-04 14:47:43.516627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.879 [2024-11-04 14:47:43.516672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.879 [2024-11-04 14:47:43.519328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.879 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.880 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.880 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.880 "name": "Existed_Raid", 00:15:13.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.880 "strip_size_kb": 64, 00:15:13.880 "state": "configuring", 00:15:13.880 "raid_level": "raid0", 00:15:13.880 "superblock": false, 00:15:13.880 "num_base_bdevs": 3, 00:15:13.880 "num_base_bdevs_discovered": 2, 00:15:13.880 "num_base_bdevs_operational": 3, 00:15:13.880 "base_bdevs_list": [ 00:15:13.880 { 00:15:13.880 "name": "BaseBdev1", 00:15:13.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.880 
"is_configured": false, 00:15:13.880 "data_offset": 0, 00:15:13.880 "data_size": 0 00:15:13.880 }, 00:15:13.880 { 00:15:13.880 "name": "BaseBdev2", 00:15:13.880 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:13.880 "is_configured": true, 00:15:13.880 "data_offset": 0, 00:15:13.880 "data_size": 65536 00:15:13.880 }, 00:15:13.880 { 00:15:13.880 "name": "BaseBdev3", 00:15:13.880 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:13.880 "is_configured": true, 00:15:13.880 "data_offset": 0, 00:15:13.880 "data_size": 65536 00:15:13.880 } 00:15:13.880 ] 00:15:13.880 }' 00:15:13.880 14:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.880 14:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.446 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:14.446 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.446 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.446 [2024-11-04 14:47:44.045201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.446 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.446 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:14.446 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.446 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.446 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.447 14:47:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.447 "name": "Existed_Raid", 00:15:14.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.447 "strip_size_kb": 64, 00:15:14.447 "state": "configuring", 00:15:14.447 "raid_level": "raid0", 00:15:14.447 "superblock": false, 00:15:14.447 "num_base_bdevs": 3, 00:15:14.447 "num_base_bdevs_discovered": 1, 00:15:14.447 "num_base_bdevs_operational": 3, 00:15:14.447 "base_bdevs_list": [ 00:15:14.447 { 00:15:14.447 "name": "BaseBdev1", 00:15:14.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.447 "is_configured": false, 00:15:14.447 "data_offset": 0, 00:15:14.447 "data_size": 0 00:15:14.447 }, 00:15:14.447 { 00:15:14.447 "name": null, 00:15:14.447 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:14.447 "is_configured": false, 00:15:14.447 "data_offset": 0, 
00:15:14.447 "data_size": 65536 00:15:14.447 }, 00:15:14.447 { 00:15:14.447 "name": "BaseBdev3", 00:15:14.447 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:14.447 "is_configured": true, 00:15:14.447 "data_offset": 0, 00:15:14.447 "data_size": 65536 00:15:14.447 } 00:15:14.447 ] 00:15:14.447 }' 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.447 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.705 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.705 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.705 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:14.705 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.705 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.963 [2024-11-04 14:47:44.647907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.963 BaseBdev1 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.963 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.963 [ 00:15:14.963 { 00:15:14.963 "name": "BaseBdev1", 00:15:14.963 "aliases": [ 00:15:14.963 "63356517-1174-47f4-81c9-1ccb2853ae2b" 00:15:14.963 ], 00:15:14.963 "product_name": "Malloc disk", 00:15:14.963 "block_size": 512, 00:15:14.963 "num_blocks": 65536, 00:15:14.963 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:14.963 "assigned_rate_limits": { 00:15:14.963 "rw_ios_per_sec": 0, 00:15:14.963 "rw_mbytes_per_sec": 0, 00:15:14.963 "r_mbytes_per_sec": 0, 00:15:14.963 "w_mbytes_per_sec": 0 00:15:14.963 }, 00:15:14.963 "claimed": true, 00:15:14.963 "claim_type": "exclusive_write", 00:15:14.963 "zoned": false, 00:15:14.963 "supported_io_types": { 00:15:14.963 "read": true, 00:15:14.963 "write": true, 00:15:14.963 "unmap": 
true, 00:15:14.963 "flush": true, 00:15:14.963 "reset": true, 00:15:14.963 "nvme_admin": false, 00:15:14.963 "nvme_io": false, 00:15:14.963 "nvme_io_md": false, 00:15:14.963 "write_zeroes": true, 00:15:14.963 "zcopy": true, 00:15:14.963 "get_zone_info": false, 00:15:14.963 "zone_management": false, 00:15:14.963 "zone_append": false, 00:15:14.963 "compare": false, 00:15:14.963 "compare_and_write": false, 00:15:14.963 "abort": true, 00:15:14.963 "seek_hole": false, 00:15:14.963 "seek_data": false, 00:15:14.963 "copy": true, 00:15:14.963 "nvme_iov_md": false 00:15:14.963 }, 00:15:14.963 "memory_domains": [ 00:15:14.963 { 00:15:14.963 "dma_device_id": "system", 00:15:14.963 "dma_device_type": 1 00:15:14.963 }, 00:15:14.963 { 00:15:14.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.964 "dma_device_type": 2 00:15:14.964 } 00:15:14.964 ], 00:15:14.964 "driver_specific": {} 00:15:14.964 } 00:15:14.964 ] 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.964 14:47:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.964 "name": "Existed_Raid", 00:15:14.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.964 "strip_size_kb": 64, 00:15:14.964 "state": "configuring", 00:15:14.964 "raid_level": "raid0", 00:15:14.964 "superblock": false, 00:15:14.964 "num_base_bdevs": 3, 00:15:14.964 "num_base_bdevs_discovered": 2, 00:15:14.964 "num_base_bdevs_operational": 3, 00:15:14.964 "base_bdevs_list": [ 00:15:14.964 { 00:15:14.964 "name": "BaseBdev1", 00:15:14.964 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:14.964 "is_configured": true, 00:15:14.964 "data_offset": 0, 00:15:14.964 "data_size": 65536 00:15:14.964 }, 00:15:14.964 { 00:15:14.964 "name": null, 00:15:14.964 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:14.964 "is_configured": false, 00:15:14.964 "data_offset": 0, 00:15:14.964 "data_size": 65536 00:15:14.964 }, 00:15:14.964 { 00:15:14.964 "name": "BaseBdev3", 00:15:14.964 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:14.964 "is_configured": true, 00:15:14.964 "data_offset": 0, 
00:15:14.964 "data_size": 65536 00:15:14.964 } 00:15:14.964 ] 00:15:14.964 }' 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.964 14:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 [2024-11-04 14:47:45.228165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.530 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.531 "name": "Existed_Raid", 00:15:15.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.531 "strip_size_kb": 64, 00:15:15.531 "state": "configuring", 00:15:15.531 "raid_level": "raid0", 00:15:15.531 "superblock": false, 00:15:15.531 "num_base_bdevs": 3, 00:15:15.531 "num_base_bdevs_discovered": 1, 00:15:15.531 "num_base_bdevs_operational": 3, 00:15:15.531 "base_bdevs_list": [ 00:15:15.531 { 00:15:15.531 "name": "BaseBdev1", 00:15:15.531 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:15.531 "is_configured": true, 00:15:15.531 "data_offset": 0, 00:15:15.531 "data_size": 65536 00:15:15.531 }, 00:15:15.531 { 
00:15:15.531 "name": null, 00:15:15.531 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:15.531 "is_configured": false, 00:15:15.531 "data_offset": 0, 00:15:15.531 "data_size": 65536 00:15:15.531 }, 00:15:15.531 { 00:15:15.531 "name": null, 00:15:15.531 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:15.531 "is_configured": false, 00:15:15.531 "data_offset": 0, 00:15:15.531 "data_size": 65536 00:15:15.531 } 00:15:15.531 ] 00:15:15.531 }' 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.531 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.111 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.111 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.111 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.111 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:16.111 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.112 [2024-11-04 14:47:45.772371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.112 "name": "Existed_Raid", 00:15:16.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.112 "strip_size_kb": 64, 00:15:16.112 "state": "configuring", 00:15:16.112 "raid_level": "raid0", 00:15:16.112 
"superblock": false, 00:15:16.112 "num_base_bdevs": 3, 00:15:16.112 "num_base_bdevs_discovered": 2, 00:15:16.112 "num_base_bdevs_operational": 3, 00:15:16.112 "base_bdevs_list": [ 00:15:16.112 { 00:15:16.112 "name": "BaseBdev1", 00:15:16.112 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:16.112 "is_configured": true, 00:15:16.112 "data_offset": 0, 00:15:16.112 "data_size": 65536 00:15:16.112 }, 00:15:16.112 { 00:15:16.112 "name": null, 00:15:16.112 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:16.112 "is_configured": false, 00:15:16.112 "data_offset": 0, 00:15:16.112 "data_size": 65536 00:15:16.112 }, 00:15:16.112 { 00:15:16.112 "name": "BaseBdev3", 00:15:16.112 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:16.112 "is_configured": true, 00:15:16.112 "data_offset": 0, 00:15:16.112 "data_size": 65536 00:15:16.112 } 00:15:16.112 ] 00:15:16.112 }' 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.112 14:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.690 [2024-11-04 14:47:46.352574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.690 "name": "Existed_Raid", 00:15:16.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.690 "strip_size_kb": 64, 00:15:16.690 "state": "configuring", 00:15:16.690 "raid_level": "raid0", 00:15:16.690 "superblock": false, 00:15:16.690 "num_base_bdevs": 3, 00:15:16.690 "num_base_bdevs_discovered": 1, 00:15:16.690 "num_base_bdevs_operational": 3, 00:15:16.690 "base_bdevs_list": [ 00:15:16.690 { 00:15:16.690 "name": null, 00:15:16.690 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:16.690 "is_configured": false, 00:15:16.690 "data_offset": 0, 00:15:16.690 "data_size": 65536 00:15:16.690 }, 00:15:16.690 { 00:15:16.690 "name": null, 00:15:16.690 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:16.690 "is_configured": false, 00:15:16.690 "data_offset": 0, 00:15:16.690 "data_size": 65536 00:15:16.690 }, 00:15:16.690 { 00:15:16.690 "name": "BaseBdev3", 00:15:16.690 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:16.690 "is_configured": true, 00:15:16.690 "data_offset": 0, 00:15:16.690 "data_size": 65536 00:15:16.690 } 00:15:16.690 ] 00:15:16.690 }' 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.690 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.257 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.257 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.257 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.257 14:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:17.257 14:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.257 [2024-11-04 14:47:47.023650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.257 "name": "Existed_Raid", 00:15:17.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.257 "strip_size_kb": 64, 00:15:17.257 "state": "configuring", 00:15:17.257 "raid_level": "raid0", 00:15:17.257 "superblock": false, 00:15:17.257 "num_base_bdevs": 3, 00:15:17.257 "num_base_bdevs_discovered": 2, 00:15:17.257 "num_base_bdevs_operational": 3, 00:15:17.257 "base_bdevs_list": [ 00:15:17.257 { 00:15:17.257 "name": null, 00:15:17.257 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:17.257 "is_configured": false, 00:15:17.257 "data_offset": 0, 00:15:17.257 "data_size": 65536 00:15:17.257 }, 00:15:17.257 { 00:15:17.257 "name": "BaseBdev2", 00:15:17.257 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:17.257 "is_configured": true, 00:15:17.257 "data_offset": 0, 00:15:17.257 "data_size": 65536 00:15:17.257 }, 00:15:17.257 { 00:15:17.257 "name": "BaseBdev3", 00:15:17.257 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:17.257 "is_configured": true, 00:15:17.257 "data_offset": 0, 00:15:17.257 "data_size": 65536 00:15:17.257 } 00:15:17.257 ] 00:15:17.257 }' 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.257 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.824 14:47:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 63356517-1174-47f4-81c9-1ccb2853ae2b 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.824 [2024-11-04 14:47:47.666005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:17.824 [2024-11-04 14:47:47.666095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:17.824 [2024-11-04 14:47:47.666112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:17.824 [2024-11-04 14:47:47.666517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:15:17.824 [2024-11-04 14:47:47.666731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:17.824 [2024-11-04 14:47:47.666756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:17.824 [2024-11-04 14:47:47.667121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.824 NewBaseBdev 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:17.824 [ 00:15:17.824 { 00:15:17.824 "name": "NewBaseBdev", 00:15:17.824 "aliases": [ 00:15:17.824 "63356517-1174-47f4-81c9-1ccb2853ae2b" 00:15:17.824 ], 00:15:17.824 "product_name": "Malloc disk", 00:15:17.824 "block_size": 512, 00:15:17.824 "num_blocks": 65536, 00:15:17.824 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:17.824 "assigned_rate_limits": { 00:15:17.824 "rw_ios_per_sec": 0, 00:15:17.824 "rw_mbytes_per_sec": 0, 00:15:17.824 "r_mbytes_per_sec": 0, 00:15:17.824 "w_mbytes_per_sec": 0 00:15:17.824 }, 00:15:17.824 "claimed": true, 00:15:17.824 "claim_type": "exclusive_write", 00:15:17.824 "zoned": false, 00:15:17.824 "supported_io_types": { 00:15:17.824 "read": true, 00:15:17.824 "write": true, 00:15:17.824 "unmap": true, 00:15:17.824 "flush": true, 00:15:17.824 "reset": true, 00:15:17.824 "nvme_admin": false, 00:15:17.824 "nvme_io": false, 00:15:17.824 "nvme_io_md": false, 00:15:17.824 "write_zeroes": true, 00:15:17.824 "zcopy": true, 00:15:17.824 "get_zone_info": false, 00:15:17.824 "zone_management": false, 00:15:17.824 "zone_append": false, 00:15:17.824 "compare": false, 00:15:17.824 "compare_and_write": false, 00:15:17.824 "abort": true, 00:15:17.824 "seek_hole": false, 00:15:17.824 "seek_data": false, 00:15:17.824 "copy": true, 00:15:17.824 "nvme_iov_md": false 00:15:17.824 }, 00:15:17.824 "memory_domains": [ 00:15:17.824 { 00:15:17.824 "dma_device_id": "system", 00:15:17.824 "dma_device_type": 1 00:15:17.824 }, 00:15:17.824 { 00:15:17.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.824 "dma_device_type": 2 00:15:17.824 } 00:15:17.824 ], 00:15:17.824 "driver_specific": {} 00:15:17.824 } 00:15:17.824 ] 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.824 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.825 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.083 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.083 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.083 "name": "Existed_Raid", 00:15:18.083 "uuid": "fc0c8b89-4789-4fb7-8e4a-c3547e289e89", 00:15:18.083 "strip_size_kb": 64, 00:15:18.083 "state": "online", 00:15:18.083 "raid_level": "raid0", 00:15:18.083 "superblock": false, 00:15:18.083 "num_base_bdevs": 3, 00:15:18.083 
"num_base_bdevs_discovered": 3, 00:15:18.083 "num_base_bdevs_operational": 3, 00:15:18.083 "base_bdevs_list": [ 00:15:18.083 { 00:15:18.083 "name": "NewBaseBdev", 00:15:18.083 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:18.083 "is_configured": true, 00:15:18.083 "data_offset": 0, 00:15:18.083 "data_size": 65536 00:15:18.083 }, 00:15:18.083 { 00:15:18.083 "name": "BaseBdev2", 00:15:18.083 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:18.083 "is_configured": true, 00:15:18.083 "data_offset": 0, 00:15:18.083 "data_size": 65536 00:15:18.083 }, 00:15:18.083 { 00:15:18.083 "name": "BaseBdev3", 00:15:18.083 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:18.083 "is_configured": true, 00:15:18.083 "data_offset": 0, 00:15:18.083 "data_size": 65536 00:15:18.083 } 00:15:18.083 ] 00:15:18.083 }' 00:15:18.083 14:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.083 14:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:18.341 [2024-11-04 14:47:48.202617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.341 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:18.598 "name": "Existed_Raid", 00:15:18.598 "aliases": [ 00:15:18.598 "fc0c8b89-4789-4fb7-8e4a-c3547e289e89" 00:15:18.598 ], 00:15:18.598 "product_name": "Raid Volume", 00:15:18.598 "block_size": 512, 00:15:18.598 "num_blocks": 196608, 00:15:18.598 "uuid": "fc0c8b89-4789-4fb7-8e4a-c3547e289e89", 00:15:18.598 "assigned_rate_limits": { 00:15:18.598 "rw_ios_per_sec": 0, 00:15:18.598 "rw_mbytes_per_sec": 0, 00:15:18.598 "r_mbytes_per_sec": 0, 00:15:18.598 "w_mbytes_per_sec": 0 00:15:18.598 }, 00:15:18.598 "claimed": false, 00:15:18.598 "zoned": false, 00:15:18.598 "supported_io_types": { 00:15:18.598 "read": true, 00:15:18.598 "write": true, 00:15:18.598 "unmap": true, 00:15:18.598 "flush": true, 00:15:18.598 "reset": true, 00:15:18.598 "nvme_admin": false, 00:15:18.598 "nvme_io": false, 00:15:18.598 "nvme_io_md": false, 00:15:18.598 "write_zeroes": true, 00:15:18.598 "zcopy": false, 00:15:18.598 "get_zone_info": false, 00:15:18.598 "zone_management": false, 00:15:18.598 "zone_append": false, 00:15:18.598 "compare": false, 00:15:18.598 "compare_and_write": false, 00:15:18.598 "abort": false, 00:15:18.598 "seek_hole": false, 00:15:18.598 "seek_data": false, 00:15:18.598 "copy": false, 00:15:18.598 "nvme_iov_md": false 00:15:18.598 }, 00:15:18.598 "memory_domains": [ 00:15:18.598 { 00:15:18.598 "dma_device_id": "system", 00:15:18.598 "dma_device_type": 1 00:15:18.598 }, 00:15:18.598 { 00:15:18.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.598 "dma_device_type": 2 00:15:18.598 }, 00:15:18.598 
{ 00:15:18.598 "dma_device_id": "system", 00:15:18.598 "dma_device_type": 1 00:15:18.598 }, 00:15:18.598 { 00:15:18.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.598 "dma_device_type": 2 00:15:18.598 }, 00:15:18.598 { 00:15:18.598 "dma_device_id": "system", 00:15:18.598 "dma_device_type": 1 00:15:18.598 }, 00:15:18.598 { 00:15:18.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.598 "dma_device_type": 2 00:15:18.598 } 00:15:18.598 ], 00:15:18.598 "driver_specific": { 00:15:18.598 "raid": { 00:15:18.598 "uuid": "fc0c8b89-4789-4fb7-8e4a-c3547e289e89", 00:15:18.598 "strip_size_kb": 64, 00:15:18.598 "state": "online", 00:15:18.598 "raid_level": "raid0", 00:15:18.598 "superblock": false, 00:15:18.598 "num_base_bdevs": 3, 00:15:18.598 "num_base_bdevs_discovered": 3, 00:15:18.598 "num_base_bdevs_operational": 3, 00:15:18.598 "base_bdevs_list": [ 00:15:18.598 { 00:15:18.598 "name": "NewBaseBdev", 00:15:18.598 "uuid": "63356517-1174-47f4-81c9-1ccb2853ae2b", 00:15:18.598 "is_configured": true, 00:15:18.598 "data_offset": 0, 00:15:18.598 "data_size": 65536 00:15:18.598 }, 00:15:18.598 { 00:15:18.598 "name": "BaseBdev2", 00:15:18.598 "uuid": "555759dd-8016-407b-8a4f-e3367a5d0dc0", 00:15:18.598 "is_configured": true, 00:15:18.598 "data_offset": 0, 00:15:18.598 "data_size": 65536 00:15:18.598 }, 00:15:18.598 { 00:15:18.598 "name": "BaseBdev3", 00:15:18.598 "uuid": "78b256bc-8512-4c88-b718-b8d25d615d43", 00:15:18.598 "is_configured": true, 00:15:18.598 "data_offset": 0, 00:15:18.598 "data_size": 65536 00:15:18.598 } 00:15:18.598 ] 00:15:18.598 } 00:15:18.598 } 00:15:18.598 }' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:18.598 BaseBdev2 00:15:18.598 BaseBdev3' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.598 
14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.598 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.856 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.856 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.857 [2024-11-04 14:47:48.518353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.857 [2024-11-04 14:47:48.518413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.857 [2024-11-04 14:47:48.518547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.857 [2024-11-04 14:47:48.518631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.857 [2024-11-04 14:47:48.518653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63905 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 63905 ']' 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63905 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63905 00:15:18.857 killing process with pid 63905 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63905' 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63905 00:15:18.857 14:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63905 00:15:18.857 [2024-11-04 14:47:48.556980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.115 [2024-11-04 14:47:48.856681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.488 14:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:20.488 00:15:20.488 real 0m12.074s 00:15:20.488 user 0m19.809s 00:15:20.488 sys 0m1.738s 00:15:20.488 ************************************ 00:15:20.488 END TEST raid_state_function_test 00:15:20.488 
************************************ 00:15:20.488 14:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:20.488 14:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.488 14:47:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:20.488 14:47:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:20.488 14:47:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:20.488 14:47:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.488 ************************************ 00:15:20.488 START TEST raid_state_function_test_sb 00:15:20.488 ************************************ 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:20.488 Process raid pid: 64543 00:15:20.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64543 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64543' 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64543 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64543 ']' 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:20.488 14:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.488 [2024-11-04 14:47:50.202496] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:15:20.489 [2024-11-04 14:47:50.203020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.746 [2024-11-04 14:47:50.393400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.746 [2024-11-04 14:47:50.566935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.003 [2024-11-04 14:47:50.798040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.003 [2024-11-04 14:47:50.798120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.569 [2024-11-04 14:47:51.246761] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.569 [2024-11-04 14:47:51.246856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.569 [2024-11-04 14:47:51.246875] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.569 [2024-11-04 14:47:51.246893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.569 [2024-11-04 14:47:51.246903] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:15:21.569 [2024-11-04 14:47:51.246918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.569 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.569 "name": "Existed_Raid", 00:15:21.569 "uuid": "4eb04a65-cb7a-4d80-88fb-cdb7e94e6781", 00:15:21.569 "strip_size_kb": 64, 00:15:21.569 "state": "configuring", 00:15:21.569 "raid_level": "raid0", 00:15:21.569 "superblock": true, 00:15:21.569 "num_base_bdevs": 3, 00:15:21.569 "num_base_bdevs_discovered": 0, 00:15:21.569 "num_base_bdevs_operational": 3, 00:15:21.569 "base_bdevs_list": [ 00:15:21.569 { 00:15:21.569 "name": "BaseBdev1", 00:15:21.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.569 "is_configured": false, 00:15:21.569 "data_offset": 0, 00:15:21.569 "data_size": 0 00:15:21.569 }, 00:15:21.569 { 00:15:21.569 "name": "BaseBdev2", 00:15:21.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.569 "is_configured": false, 00:15:21.569 "data_offset": 0, 00:15:21.569 "data_size": 0 00:15:21.570 }, 00:15:21.570 { 00:15:21.570 "name": "BaseBdev3", 00:15:21.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.570 "is_configured": false, 00:15:21.570 "data_offset": 0, 00:15:21.570 "data_size": 0 00:15:21.570 } 00:15:21.570 ] 00:15:21.570 }' 00:15:21.570 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.570 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.148 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.148 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.148 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.148 [2024-11-04 14:47:51.754801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.149 [2024-11-04 14:47:51.754865] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.149 [2024-11-04 14:47:51.762775] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.149 [2024-11-04 14:47:51.762840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.149 [2024-11-04 14:47:51.762856] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.149 [2024-11-04 14:47:51.762873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.149 [2024-11-04 14:47:51.762883] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.149 [2024-11-04 14:47:51.762898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.149 [2024-11-04 14:47:51.811832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.149 BaseBdev1 
00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.149 [ 00:15:22.149 { 00:15:22.149 "name": "BaseBdev1", 00:15:22.149 "aliases": [ 00:15:22.149 "70e1864c-a90d-4735-a62d-fe7a2f65b9e9" 00:15:22.149 ], 00:15:22.149 "product_name": "Malloc disk", 00:15:22.149 "block_size": 512, 00:15:22.149 "num_blocks": 65536, 00:15:22.149 "uuid": "70e1864c-a90d-4735-a62d-fe7a2f65b9e9", 00:15:22.149 "assigned_rate_limits": { 00:15:22.149 
"rw_ios_per_sec": 0, 00:15:22.149 "rw_mbytes_per_sec": 0, 00:15:22.149 "r_mbytes_per_sec": 0, 00:15:22.149 "w_mbytes_per_sec": 0 00:15:22.149 }, 00:15:22.149 "claimed": true, 00:15:22.149 "claim_type": "exclusive_write", 00:15:22.149 "zoned": false, 00:15:22.149 "supported_io_types": { 00:15:22.149 "read": true, 00:15:22.149 "write": true, 00:15:22.149 "unmap": true, 00:15:22.149 "flush": true, 00:15:22.149 "reset": true, 00:15:22.149 "nvme_admin": false, 00:15:22.149 "nvme_io": false, 00:15:22.149 "nvme_io_md": false, 00:15:22.149 "write_zeroes": true, 00:15:22.149 "zcopy": true, 00:15:22.149 "get_zone_info": false, 00:15:22.149 "zone_management": false, 00:15:22.149 "zone_append": false, 00:15:22.149 "compare": false, 00:15:22.149 "compare_and_write": false, 00:15:22.149 "abort": true, 00:15:22.149 "seek_hole": false, 00:15:22.149 "seek_data": false, 00:15:22.149 "copy": true, 00:15:22.149 "nvme_iov_md": false 00:15:22.149 }, 00:15:22.149 "memory_domains": [ 00:15:22.149 { 00:15:22.149 "dma_device_id": "system", 00:15:22.149 "dma_device_type": 1 00:15:22.149 }, 00:15:22.149 { 00:15:22.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.149 "dma_device_type": 2 00:15:22.149 } 00:15:22.149 ], 00:15:22.149 "driver_specific": {} 00:15:22.149 } 00:15:22.149 ] 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.149 "name": "Existed_Raid", 00:15:22.149 "uuid": "2bfc5977-bc3a-4fdb-a1b7-adbc7308c81f", 00:15:22.149 "strip_size_kb": 64, 00:15:22.149 "state": "configuring", 00:15:22.149 "raid_level": "raid0", 00:15:22.149 "superblock": true, 00:15:22.149 "num_base_bdevs": 3, 00:15:22.149 "num_base_bdevs_discovered": 1, 00:15:22.149 "num_base_bdevs_operational": 3, 00:15:22.149 "base_bdevs_list": [ 00:15:22.149 { 00:15:22.149 "name": "BaseBdev1", 00:15:22.149 "uuid": "70e1864c-a90d-4735-a62d-fe7a2f65b9e9", 00:15:22.149 "is_configured": true, 00:15:22.149 "data_offset": 2048, 00:15:22.149 "data_size": 63488 
00:15:22.149 }, 00:15:22.149 { 00:15:22.149 "name": "BaseBdev2", 00:15:22.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.149 "is_configured": false, 00:15:22.149 "data_offset": 0, 00:15:22.149 "data_size": 0 00:15:22.149 }, 00:15:22.149 { 00:15:22.149 "name": "BaseBdev3", 00:15:22.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.149 "is_configured": false, 00:15:22.149 "data_offset": 0, 00:15:22.149 "data_size": 0 00:15:22.149 } 00:15:22.149 ] 00:15:22.149 }' 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.149 14:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.727 [2024-11-04 14:47:52.344057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.727 [2024-11-04 14:47:52.344443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.727 [2024-11-04 14:47:52.352112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.727 [2024-11-04 
14:47:52.354997] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.727 [2024-11-04 14:47:52.355056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.727 [2024-11-04 14:47:52.355074] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.727 [2024-11-04 14:47:52.355091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.727 "name": "Existed_Raid", 00:15:22.727 "uuid": "67cbbc2e-9d6d-4aed-9741-12b7d70867c8", 00:15:22.727 "strip_size_kb": 64, 00:15:22.727 "state": "configuring", 00:15:22.727 "raid_level": "raid0", 00:15:22.727 "superblock": true, 00:15:22.727 "num_base_bdevs": 3, 00:15:22.727 "num_base_bdevs_discovered": 1, 00:15:22.727 "num_base_bdevs_operational": 3, 00:15:22.727 "base_bdevs_list": [ 00:15:22.727 { 00:15:22.727 "name": "BaseBdev1", 00:15:22.727 "uuid": "70e1864c-a90d-4735-a62d-fe7a2f65b9e9", 00:15:22.727 "is_configured": true, 00:15:22.727 "data_offset": 2048, 00:15:22.727 "data_size": 63488 00:15:22.727 }, 00:15:22.727 { 00:15:22.727 "name": "BaseBdev2", 00:15:22.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.727 "is_configured": false, 00:15:22.727 "data_offset": 0, 00:15:22.727 "data_size": 0 00:15:22.727 }, 00:15:22.727 { 00:15:22.727 "name": "BaseBdev3", 00:15:22.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.727 "is_configured": false, 00:15:22.727 "data_offset": 0, 00:15:22.727 "data_size": 0 00:15:22.727 } 00:15:22.727 ] 00:15:22.727 }' 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.727 14:47:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.986 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.986 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.986 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.244 [2024-11-04 14:47:52.921642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.244 BaseBdev2 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.244 [ 00:15:23.244 { 00:15:23.244 "name": "BaseBdev2", 00:15:23.244 "aliases": [ 00:15:23.244 "b724cfe2-4357-4ab7-baff-1a9f9715b577" 00:15:23.244 ], 00:15:23.244 "product_name": "Malloc disk", 00:15:23.244 "block_size": 512, 00:15:23.244 "num_blocks": 65536, 00:15:23.244 "uuid": "b724cfe2-4357-4ab7-baff-1a9f9715b577", 00:15:23.244 "assigned_rate_limits": { 00:15:23.244 "rw_ios_per_sec": 0, 00:15:23.244 "rw_mbytes_per_sec": 0, 00:15:23.244 "r_mbytes_per_sec": 0, 00:15:23.244 "w_mbytes_per_sec": 0 00:15:23.244 }, 00:15:23.244 "claimed": true, 00:15:23.244 "claim_type": "exclusive_write", 00:15:23.244 "zoned": false, 00:15:23.244 "supported_io_types": { 00:15:23.244 "read": true, 00:15:23.244 "write": true, 00:15:23.244 "unmap": true, 00:15:23.244 "flush": true, 00:15:23.244 "reset": true, 00:15:23.244 "nvme_admin": false, 00:15:23.244 "nvme_io": false, 00:15:23.244 "nvme_io_md": false, 00:15:23.244 "write_zeroes": true, 00:15:23.244 "zcopy": true, 00:15:23.244 "get_zone_info": false, 00:15:23.244 "zone_management": false, 00:15:23.244 "zone_append": false, 00:15:23.244 "compare": false, 00:15:23.244 "compare_and_write": false, 00:15:23.244 "abort": true, 00:15:23.244 "seek_hole": false, 00:15:23.244 "seek_data": false, 00:15:23.244 "copy": true, 00:15:23.244 "nvme_iov_md": false 00:15:23.244 }, 00:15:23.244 "memory_domains": [ 00:15:23.244 { 00:15:23.244 "dma_device_id": "system", 00:15:23.244 "dma_device_type": 1 00:15:23.244 }, 00:15:23.244 { 00:15:23.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.244 "dma_device_type": 2 00:15:23.244 } 00:15:23.244 ], 00:15:23.244 "driver_specific": {} 00:15:23.244 } 00:15:23.244 ] 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.244 14:47:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.244 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.244 "name": "Existed_Raid", 00:15:23.244 "uuid": "67cbbc2e-9d6d-4aed-9741-12b7d70867c8", 00:15:23.244 "strip_size_kb": 64, 00:15:23.244 "state": "configuring", 00:15:23.244 "raid_level": "raid0", 00:15:23.244 "superblock": true, 00:15:23.244 "num_base_bdevs": 3, 00:15:23.244 "num_base_bdevs_discovered": 2, 00:15:23.244 "num_base_bdevs_operational": 3, 00:15:23.244 "base_bdevs_list": [ 00:15:23.244 { 00:15:23.244 "name": "BaseBdev1", 00:15:23.244 "uuid": "70e1864c-a90d-4735-a62d-fe7a2f65b9e9", 00:15:23.244 "is_configured": true, 00:15:23.244 "data_offset": 2048, 00:15:23.244 "data_size": 63488 00:15:23.244 }, 00:15:23.244 { 00:15:23.244 "name": "BaseBdev2", 00:15:23.244 "uuid": "b724cfe2-4357-4ab7-baff-1a9f9715b577", 00:15:23.244 "is_configured": true, 00:15:23.244 "data_offset": 2048, 00:15:23.244 "data_size": 63488 00:15:23.244 }, 00:15:23.244 { 00:15:23.244 "name": "BaseBdev3", 00:15:23.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.244 "is_configured": false, 00:15:23.244 "data_offset": 0, 00:15:23.244 "data_size": 0 00:15:23.244 } 00:15:23.244 ] 00:15:23.244 }' 00:15:23.244 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.244 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.810 [2024-11-04 14:47:53.549739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.810 [2024-11-04 14:47:53.550113] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:23.810 [2024-11-04 14:47:53.550149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:23.810 BaseBdev3 00:15:23.810 [2024-11-04 14:47:53.550548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:23.810 [2024-11-04 14:47:53.550752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:23.810 [2024-11-04 14:47:53.550777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:23.810 [2024-11-04 14:47:53.550973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.810 [ 00:15:23.810 { 00:15:23.810 "name": "BaseBdev3", 00:15:23.810 "aliases": [ 00:15:23.810 "1948fcc6-9813-4216-80b3-26d0515743d5" 00:15:23.810 ], 00:15:23.810 "product_name": "Malloc disk", 00:15:23.810 "block_size": 512, 00:15:23.810 "num_blocks": 65536, 00:15:23.810 "uuid": "1948fcc6-9813-4216-80b3-26d0515743d5", 00:15:23.810 "assigned_rate_limits": { 00:15:23.810 "rw_ios_per_sec": 0, 00:15:23.810 "rw_mbytes_per_sec": 0, 00:15:23.810 "r_mbytes_per_sec": 0, 00:15:23.810 "w_mbytes_per_sec": 0 00:15:23.810 }, 00:15:23.810 "claimed": true, 00:15:23.810 "claim_type": "exclusive_write", 00:15:23.810 "zoned": false, 00:15:23.810 "supported_io_types": { 00:15:23.810 "read": true, 00:15:23.810 "write": true, 00:15:23.810 "unmap": true, 00:15:23.810 "flush": true, 00:15:23.810 "reset": true, 00:15:23.810 "nvme_admin": false, 00:15:23.810 "nvme_io": false, 00:15:23.810 "nvme_io_md": false, 00:15:23.810 "write_zeroes": true, 00:15:23.810 "zcopy": true, 00:15:23.810 "get_zone_info": false, 00:15:23.810 "zone_management": false, 00:15:23.810 "zone_append": false, 00:15:23.810 "compare": false, 00:15:23.810 "compare_and_write": false, 00:15:23.810 "abort": true, 00:15:23.810 "seek_hole": false, 00:15:23.810 "seek_data": false, 00:15:23.810 "copy": true, 00:15:23.810 "nvme_iov_md": false 00:15:23.810 }, 00:15:23.810 "memory_domains": [ 00:15:23.810 { 00:15:23.810 "dma_device_id": "system", 00:15:23.810 "dma_device_type": 1 00:15:23.810 }, 00:15:23.810 { 00:15:23.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.810 "dma_device_type": 2 00:15:23.810 } 00:15:23.810 ], 00:15:23.810 "driver_specific": 
{} 00:15:23.810 } 00:15:23.810 ] 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.810 "name": "Existed_Raid", 00:15:23.810 "uuid": "67cbbc2e-9d6d-4aed-9741-12b7d70867c8", 00:15:23.810 "strip_size_kb": 64, 00:15:23.810 "state": "online", 00:15:23.810 "raid_level": "raid0", 00:15:23.810 "superblock": true, 00:15:23.810 "num_base_bdevs": 3, 00:15:23.810 "num_base_bdevs_discovered": 3, 00:15:23.810 "num_base_bdevs_operational": 3, 00:15:23.810 "base_bdevs_list": [ 00:15:23.810 { 00:15:23.810 "name": "BaseBdev1", 00:15:23.810 "uuid": "70e1864c-a90d-4735-a62d-fe7a2f65b9e9", 00:15:23.810 "is_configured": true, 00:15:23.810 "data_offset": 2048, 00:15:23.810 "data_size": 63488 00:15:23.810 }, 00:15:23.810 { 00:15:23.810 "name": "BaseBdev2", 00:15:23.810 "uuid": "b724cfe2-4357-4ab7-baff-1a9f9715b577", 00:15:23.810 "is_configured": true, 00:15:23.810 "data_offset": 2048, 00:15:23.810 "data_size": 63488 00:15:23.810 }, 00:15:23.810 { 00:15:23.810 "name": "BaseBdev3", 00:15:23.810 "uuid": "1948fcc6-9813-4216-80b3-26d0515743d5", 00:15:23.810 "is_configured": true, 00:15:23.810 "data_offset": 2048, 00:15:23.810 "data_size": 63488 00:15:23.810 } 00:15:23.810 ] 00:15:23.810 }' 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.810 14:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:24.376 [2024-11-04 14:47:54.098434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:24.376 "name": "Existed_Raid", 00:15:24.376 "aliases": [ 00:15:24.376 "67cbbc2e-9d6d-4aed-9741-12b7d70867c8" 00:15:24.376 ], 00:15:24.376 "product_name": "Raid Volume", 00:15:24.376 "block_size": 512, 00:15:24.376 "num_blocks": 190464, 00:15:24.376 "uuid": "67cbbc2e-9d6d-4aed-9741-12b7d70867c8", 00:15:24.376 "assigned_rate_limits": { 00:15:24.376 "rw_ios_per_sec": 0, 00:15:24.376 "rw_mbytes_per_sec": 0, 00:15:24.376 "r_mbytes_per_sec": 0, 00:15:24.376 "w_mbytes_per_sec": 0 00:15:24.376 }, 00:15:24.376 "claimed": false, 00:15:24.376 "zoned": false, 00:15:24.376 "supported_io_types": { 00:15:24.376 "read": true, 00:15:24.376 "write": true, 00:15:24.376 "unmap": true, 00:15:24.376 "flush": true, 00:15:24.376 "reset": true, 00:15:24.376 "nvme_admin": false, 00:15:24.376 "nvme_io": false, 00:15:24.376 "nvme_io_md": false, 00:15:24.376 
"write_zeroes": true, 00:15:24.376 "zcopy": false, 00:15:24.376 "get_zone_info": false, 00:15:24.376 "zone_management": false, 00:15:24.376 "zone_append": false, 00:15:24.376 "compare": false, 00:15:24.376 "compare_and_write": false, 00:15:24.376 "abort": false, 00:15:24.376 "seek_hole": false, 00:15:24.376 "seek_data": false, 00:15:24.376 "copy": false, 00:15:24.376 "nvme_iov_md": false 00:15:24.376 }, 00:15:24.376 "memory_domains": [ 00:15:24.376 { 00:15:24.376 "dma_device_id": "system", 00:15:24.376 "dma_device_type": 1 00:15:24.376 }, 00:15:24.376 { 00:15:24.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.376 "dma_device_type": 2 00:15:24.376 }, 00:15:24.376 { 00:15:24.376 "dma_device_id": "system", 00:15:24.376 "dma_device_type": 1 00:15:24.376 }, 00:15:24.376 { 00:15:24.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.376 "dma_device_type": 2 00:15:24.376 }, 00:15:24.376 { 00:15:24.376 "dma_device_id": "system", 00:15:24.376 "dma_device_type": 1 00:15:24.376 }, 00:15:24.376 { 00:15:24.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.376 "dma_device_type": 2 00:15:24.376 } 00:15:24.376 ], 00:15:24.376 "driver_specific": { 00:15:24.376 "raid": { 00:15:24.376 "uuid": "67cbbc2e-9d6d-4aed-9741-12b7d70867c8", 00:15:24.376 "strip_size_kb": 64, 00:15:24.376 "state": "online", 00:15:24.376 "raid_level": "raid0", 00:15:24.376 "superblock": true, 00:15:24.376 "num_base_bdevs": 3, 00:15:24.376 "num_base_bdevs_discovered": 3, 00:15:24.376 "num_base_bdevs_operational": 3, 00:15:24.376 "base_bdevs_list": [ 00:15:24.376 { 00:15:24.376 "name": "BaseBdev1", 00:15:24.376 "uuid": "70e1864c-a90d-4735-a62d-fe7a2f65b9e9", 00:15:24.376 "is_configured": true, 00:15:24.376 "data_offset": 2048, 00:15:24.376 "data_size": 63488 00:15:24.376 }, 00:15:24.376 { 00:15:24.376 "name": "BaseBdev2", 00:15:24.376 "uuid": "b724cfe2-4357-4ab7-baff-1a9f9715b577", 00:15:24.376 "is_configured": true, 00:15:24.376 "data_offset": 2048, 00:15:24.376 "data_size": 63488 00:15:24.376 }, 
00:15:24.376 { 00:15:24.376 "name": "BaseBdev3", 00:15:24.376 "uuid": "1948fcc6-9813-4216-80b3-26d0515743d5", 00:15:24.376 "is_configured": true, 00:15:24.376 "data_offset": 2048, 00:15:24.376 "data_size": 63488 00:15:24.376 } 00:15:24.376 ] 00:15:24.376 } 00:15:24.376 } 00:15:24.376 }' 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:24.376 BaseBdev2 00:15:24.376 BaseBdev3' 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.376 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.634 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.634 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:24.634 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:24.634 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.634 
14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:24.634 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.634 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.634 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.635 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 [2024-11-04 14:47:54.434171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.635 [2024-11-04 14:47:54.434243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.635 [2024-11-04 14:47:54.434329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.924 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.924 "name": "Existed_Raid", 00:15:24.924 "uuid": "67cbbc2e-9d6d-4aed-9741-12b7d70867c8", 00:15:24.924 "strip_size_kb": 64, 00:15:24.924 "state": "offline", 00:15:24.924 "raid_level": "raid0", 00:15:24.924 "superblock": true, 00:15:24.924 "num_base_bdevs": 3, 00:15:24.924 "num_base_bdevs_discovered": 2, 00:15:24.924 "num_base_bdevs_operational": 2, 00:15:24.924 "base_bdevs_list": [ 00:15:24.924 { 00:15:24.924 "name": null, 00:15:24.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.924 "is_configured": false, 00:15:24.924 "data_offset": 0, 00:15:24.924 "data_size": 63488 00:15:24.924 }, 00:15:24.924 { 00:15:24.924 "name": "BaseBdev2", 00:15:24.925 "uuid": "b724cfe2-4357-4ab7-baff-1a9f9715b577", 00:15:24.925 "is_configured": true, 00:15:24.925 "data_offset": 2048, 00:15:24.925 "data_size": 63488 00:15:24.925 }, 00:15:24.925 { 00:15:24.925 "name": "BaseBdev3", 00:15:24.925 "uuid": "1948fcc6-9813-4216-80b3-26d0515743d5", 
00:15:24.925 "is_configured": true, 00:15:24.925 "data_offset": 2048, 00:15:24.925 "data_size": 63488 00:15:24.925 } 00:15:24.925 ] 00:15:24.925 }' 00:15:24.925 14:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.925 14:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:25.197 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.197 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.197 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.197 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.197 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.456 [2024-11-04 14:47:55.096055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.456 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.456 [2024-11-04 14:47:55.251271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:25.456 [2024-11-04 14:47:55.251388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.715 BaseBdev2 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:25.715 14:47:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.715 [ 00:15:25.715 { 00:15:25.715 "name": "BaseBdev2", 00:15:25.715 "aliases": [ 00:15:25.715 "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13" 00:15:25.715 ], 00:15:25.715 "product_name": "Malloc disk", 00:15:25.715 "block_size": 512, 00:15:25.715 "num_blocks": 65536, 00:15:25.715 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:25.715 "assigned_rate_limits": { 00:15:25.715 "rw_ios_per_sec": 0, 00:15:25.715 "rw_mbytes_per_sec": 0, 00:15:25.715 "r_mbytes_per_sec": 0, 00:15:25.715 "w_mbytes_per_sec": 0 00:15:25.715 }, 00:15:25.715 "claimed": false, 00:15:25.715 "zoned": false, 00:15:25.715 "supported_io_types": { 00:15:25.715 "read": true, 00:15:25.715 "write": true, 00:15:25.715 "unmap": true, 00:15:25.715 "flush": true, 00:15:25.715 "reset": true, 00:15:25.715 "nvme_admin": false, 00:15:25.715 "nvme_io": false, 00:15:25.715 "nvme_io_md": false, 00:15:25.715 "write_zeroes": true, 00:15:25.715 "zcopy": true, 00:15:25.715 "get_zone_info": false, 00:15:25.715 
"zone_management": false, 00:15:25.715 "zone_append": false, 00:15:25.715 "compare": false, 00:15:25.715 "compare_and_write": false, 00:15:25.715 "abort": true, 00:15:25.715 "seek_hole": false, 00:15:25.715 "seek_data": false, 00:15:25.715 "copy": true, 00:15:25.715 "nvme_iov_md": false 00:15:25.715 }, 00:15:25.715 "memory_domains": [ 00:15:25.715 { 00:15:25.715 "dma_device_id": "system", 00:15:25.715 "dma_device_type": 1 00:15:25.715 }, 00:15:25.715 { 00:15:25.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.715 "dma_device_type": 2 00:15:25.715 } 00:15:25.715 ], 00:15:25.715 "driver_specific": {} 00:15:25.715 } 00:15:25.715 ] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.715 BaseBdev3 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.715 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.716 [ 00:15:25.716 { 00:15:25.716 "name": "BaseBdev3", 00:15:25.716 "aliases": [ 00:15:25.716 "e590756b-6589-4857-9508-6afe7b0ed400" 00:15:25.716 ], 00:15:25.716 "product_name": "Malloc disk", 00:15:25.716 "block_size": 512, 00:15:25.716 "num_blocks": 65536, 00:15:25.716 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:25.716 "assigned_rate_limits": { 00:15:25.716 "rw_ios_per_sec": 0, 00:15:25.716 "rw_mbytes_per_sec": 0, 00:15:25.716 "r_mbytes_per_sec": 0, 00:15:25.716 "w_mbytes_per_sec": 0 00:15:25.716 }, 00:15:25.716 "claimed": false, 00:15:25.716 "zoned": false, 00:15:25.716 "supported_io_types": { 00:15:25.716 "read": true, 00:15:25.716 "write": true, 00:15:25.716 "unmap": true, 00:15:25.716 "flush": true, 00:15:25.716 "reset": true, 00:15:25.716 "nvme_admin": false, 00:15:25.716 "nvme_io": false, 00:15:25.716 "nvme_io_md": false, 00:15:25.716 "write_zeroes": true, 00:15:25.716 
"zcopy": true, 00:15:25.716 "get_zone_info": false, 00:15:25.716 "zone_management": false, 00:15:25.716 "zone_append": false, 00:15:25.716 "compare": false, 00:15:25.716 "compare_and_write": false, 00:15:25.716 "abort": true, 00:15:25.716 "seek_hole": false, 00:15:25.716 "seek_data": false, 00:15:25.716 "copy": true, 00:15:25.716 "nvme_iov_md": false 00:15:25.716 }, 00:15:25.716 "memory_domains": [ 00:15:25.716 { 00:15:25.716 "dma_device_id": "system", 00:15:25.716 "dma_device_type": 1 00:15:25.716 }, 00:15:25.716 { 00:15:25.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.716 "dma_device_type": 2 00:15:25.716 } 00:15:25.716 ], 00:15:25.716 "driver_specific": {} 00:15:25.716 } 00:15:25.716 ] 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.716 [2024-11-04 14:47:55.570501] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.716 [2024-11-04 14:47:55.570790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.716 [2024-11-04 14:47:55.570855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.716 [2024-11-04 14:47:55.573619] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.716 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.974 14:47:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.974 "name": "Existed_Raid", 00:15:25.974 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:25.974 "strip_size_kb": 64, 00:15:25.974 "state": "configuring", 00:15:25.974 "raid_level": "raid0", 00:15:25.974 "superblock": true, 00:15:25.974 "num_base_bdevs": 3, 00:15:25.974 "num_base_bdevs_discovered": 2, 00:15:25.974 "num_base_bdevs_operational": 3, 00:15:25.974 "base_bdevs_list": [ 00:15:25.974 { 00:15:25.974 "name": "BaseBdev1", 00:15:25.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.974 "is_configured": false, 00:15:25.974 "data_offset": 0, 00:15:25.974 "data_size": 0 00:15:25.974 }, 00:15:25.974 { 00:15:25.974 "name": "BaseBdev2", 00:15:25.974 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:25.974 "is_configured": true, 00:15:25.974 "data_offset": 2048, 00:15:25.974 "data_size": 63488 00:15:25.974 }, 00:15:25.974 { 00:15:25.974 "name": "BaseBdev3", 00:15:25.974 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:25.974 "is_configured": true, 00:15:25.974 "data_offset": 2048, 00:15:25.974 "data_size": 63488 00:15:25.974 } 00:15:25.974 ] 00:15:25.974 }' 00:15:25.974 14:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.974 14:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.541 [2024-11-04 14:47:56.130888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.541 14:47:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.541 "name": "Existed_Raid", 00:15:26.541 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:26.541 "strip_size_kb": 64, 
00:15:26.541 "state": "configuring", 00:15:26.541 "raid_level": "raid0", 00:15:26.541 "superblock": true, 00:15:26.541 "num_base_bdevs": 3, 00:15:26.541 "num_base_bdevs_discovered": 1, 00:15:26.541 "num_base_bdevs_operational": 3, 00:15:26.541 "base_bdevs_list": [ 00:15:26.541 { 00:15:26.541 "name": "BaseBdev1", 00:15:26.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.541 "is_configured": false, 00:15:26.541 "data_offset": 0, 00:15:26.541 "data_size": 0 00:15:26.541 }, 00:15:26.541 { 00:15:26.541 "name": null, 00:15:26.541 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:26.541 "is_configured": false, 00:15:26.541 "data_offset": 0, 00:15:26.541 "data_size": 63488 00:15:26.541 }, 00:15:26.541 { 00:15:26.541 "name": "BaseBdev3", 00:15:26.541 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:26.541 "is_configured": true, 00:15:26.541 "data_offset": 2048, 00:15:26.541 "data_size": 63488 00:15:26.541 } 00:15:26.541 ] 00:15:26.541 }' 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.541 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.798 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.798 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.798 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.798 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:26.798 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.799 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:26.799 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:15:26.799 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.799 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.055 [2024-11-04 14:47:56.692858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.055 BaseBdev1 00:15:27.055 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.055 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:27.055 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:27.055 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:27.055 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:27.055 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:27.055 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.056 
[ 00:15:27.056 { 00:15:27.056 "name": "BaseBdev1", 00:15:27.056 "aliases": [ 00:15:27.056 "72d04789-0ad4-4f3f-9336-d4298b4bcd43" 00:15:27.056 ], 00:15:27.056 "product_name": "Malloc disk", 00:15:27.056 "block_size": 512, 00:15:27.056 "num_blocks": 65536, 00:15:27.056 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:27.056 "assigned_rate_limits": { 00:15:27.056 "rw_ios_per_sec": 0, 00:15:27.056 "rw_mbytes_per_sec": 0, 00:15:27.056 "r_mbytes_per_sec": 0, 00:15:27.056 "w_mbytes_per_sec": 0 00:15:27.056 }, 00:15:27.056 "claimed": true, 00:15:27.056 "claim_type": "exclusive_write", 00:15:27.056 "zoned": false, 00:15:27.056 "supported_io_types": { 00:15:27.056 "read": true, 00:15:27.056 "write": true, 00:15:27.056 "unmap": true, 00:15:27.056 "flush": true, 00:15:27.056 "reset": true, 00:15:27.056 "nvme_admin": false, 00:15:27.056 "nvme_io": false, 00:15:27.056 "nvme_io_md": false, 00:15:27.056 "write_zeroes": true, 00:15:27.056 "zcopy": true, 00:15:27.056 "get_zone_info": false, 00:15:27.056 "zone_management": false, 00:15:27.056 "zone_append": false, 00:15:27.056 "compare": false, 00:15:27.056 "compare_and_write": false, 00:15:27.056 "abort": true, 00:15:27.056 "seek_hole": false, 00:15:27.056 "seek_data": false, 00:15:27.056 "copy": true, 00:15:27.056 "nvme_iov_md": false 00:15:27.056 }, 00:15:27.056 "memory_domains": [ 00:15:27.056 { 00:15:27.056 "dma_device_id": "system", 00:15:27.056 "dma_device_type": 1 00:15:27.056 }, 00:15:27.056 { 00:15:27.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.056 "dma_device_type": 2 00:15:27.056 } 00:15:27.056 ], 00:15:27.056 "driver_specific": {} 00:15:27.056 } 00:15:27.056 ] 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.056 "name": "Existed_Raid", 00:15:27.056 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:27.056 "strip_size_kb": 64, 00:15:27.056 "state": "configuring", 00:15:27.056 "raid_level": "raid0", 00:15:27.056 "superblock": true, 
00:15:27.056 "num_base_bdevs": 3, 00:15:27.056 "num_base_bdevs_discovered": 2, 00:15:27.056 "num_base_bdevs_operational": 3, 00:15:27.056 "base_bdevs_list": [ 00:15:27.056 { 00:15:27.056 "name": "BaseBdev1", 00:15:27.056 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:27.056 "is_configured": true, 00:15:27.056 "data_offset": 2048, 00:15:27.056 "data_size": 63488 00:15:27.056 }, 00:15:27.056 { 00:15:27.056 "name": null, 00:15:27.056 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:27.056 "is_configured": false, 00:15:27.056 "data_offset": 0, 00:15:27.056 "data_size": 63488 00:15:27.056 }, 00:15:27.056 { 00:15:27.056 "name": "BaseBdev3", 00:15:27.056 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:27.056 "is_configured": true, 00:15:27.056 "data_offset": 2048, 00:15:27.056 "data_size": 63488 00:15:27.056 } 00:15:27.056 ] 00:15:27.056 }' 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.056 14:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.622 [2024-11-04 14:47:57.297155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.622 "name": "Existed_Raid", 00:15:27.622 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:27.622 "strip_size_kb": 64, 00:15:27.622 "state": "configuring", 00:15:27.622 "raid_level": "raid0", 00:15:27.622 "superblock": true, 00:15:27.622 "num_base_bdevs": 3, 00:15:27.622 "num_base_bdevs_discovered": 1, 00:15:27.622 "num_base_bdevs_operational": 3, 00:15:27.622 "base_bdevs_list": [ 00:15:27.622 { 00:15:27.622 "name": "BaseBdev1", 00:15:27.622 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:27.622 "is_configured": true, 00:15:27.622 "data_offset": 2048, 00:15:27.622 "data_size": 63488 00:15:27.622 }, 00:15:27.622 { 00:15:27.622 "name": null, 00:15:27.622 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:27.622 "is_configured": false, 00:15:27.622 "data_offset": 0, 00:15:27.622 "data_size": 63488 00:15:27.622 }, 00:15:27.622 { 00:15:27.622 "name": null, 00:15:27.622 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:27.622 "is_configured": false, 00:15:27.622 "data_offset": 0, 00:15:27.622 "data_size": 63488 00:15:27.622 } 00:15:27.622 ] 00:15:27.622 }' 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.622 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.188 [2024-11-04 14:47:57.865371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.188 "name": "Existed_Raid", 00:15:28.188 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:28.188 "strip_size_kb": 64, 00:15:28.188 "state": "configuring", 00:15:28.188 "raid_level": "raid0", 00:15:28.188 "superblock": true, 00:15:28.188 "num_base_bdevs": 3, 00:15:28.188 "num_base_bdevs_discovered": 2, 00:15:28.188 "num_base_bdevs_operational": 3, 00:15:28.188 "base_bdevs_list": [ 00:15:28.188 { 00:15:28.188 "name": "BaseBdev1", 00:15:28.188 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:28.188 "is_configured": true, 00:15:28.188 "data_offset": 2048, 00:15:28.188 "data_size": 63488 00:15:28.188 }, 00:15:28.188 { 00:15:28.188 "name": null, 00:15:28.188 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:28.188 "is_configured": false, 00:15:28.188 "data_offset": 0, 00:15:28.188 "data_size": 63488 00:15:28.188 }, 00:15:28.188 { 00:15:28.188 "name": "BaseBdev3", 00:15:28.188 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:28.188 "is_configured": true, 00:15:28.188 "data_offset": 2048, 00:15:28.188 "data_size": 63488 00:15:28.188 } 00:15:28.188 ] 00:15:28.188 }' 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.188 14:47:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.757 [2024-11-04 14:47:58.437548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.757 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.758 "name": "Existed_Raid", 00:15:28.758 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:28.758 "strip_size_kb": 64, 00:15:28.758 "state": "configuring", 00:15:28.758 "raid_level": "raid0", 00:15:28.758 "superblock": true, 00:15:28.758 "num_base_bdevs": 3, 00:15:28.758 "num_base_bdevs_discovered": 1, 00:15:28.758 "num_base_bdevs_operational": 3, 00:15:28.758 "base_bdevs_list": [ 00:15:28.758 { 00:15:28.758 "name": null, 00:15:28.758 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:28.758 "is_configured": false, 00:15:28.758 "data_offset": 0, 00:15:28.758 "data_size": 63488 00:15:28.758 }, 00:15:28.758 { 00:15:28.758 "name": null, 00:15:28.758 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:28.758 "is_configured": false, 00:15:28.758 "data_offset": 0, 00:15:28.758 
"data_size": 63488 00:15:28.758 }, 00:15:28.758 { 00:15:28.758 "name": "BaseBdev3", 00:15:28.758 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:28.758 "is_configured": true, 00:15:28.758 "data_offset": 2048, 00:15:28.758 "data_size": 63488 00:15:28.758 } 00:15:28.758 ] 00:15:28.758 }' 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.758 14:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 [2024-11-04 14:47:59.118794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:29.330 14:47:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.330 "name": "Existed_Raid", 00:15:29.330 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:29.330 "strip_size_kb": 64, 00:15:29.330 "state": "configuring", 00:15:29.330 "raid_level": "raid0", 00:15:29.330 "superblock": true, 00:15:29.330 "num_base_bdevs": 3, 00:15:29.330 
"num_base_bdevs_discovered": 2, 00:15:29.330 "num_base_bdevs_operational": 3, 00:15:29.330 "base_bdevs_list": [ 00:15:29.330 { 00:15:29.330 "name": null, 00:15:29.330 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:29.330 "is_configured": false, 00:15:29.330 "data_offset": 0, 00:15:29.330 "data_size": 63488 00:15:29.330 }, 00:15:29.330 { 00:15:29.330 "name": "BaseBdev2", 00:15:29.330 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:29.330 "is_configured": true, 00:15:29.330 "data_offset": 2048, 00:15:29.330 "data_size": 63488 00:15:29.330 }, 00:15:29.330 { 00:15:29.330 "name": "BaseBdev3", 00:15:29.330 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:29.330 "is_configured": true, 00:15:29.330 "data_offset": 2048, 00:15:29.330 "data_size": 63488 00:15:29.330 } 00:15:29.330 ] 00:15:29.330 }' 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.330 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:29.896 14:47:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 72d04789-0ad4-4f3f-9336-d4298b4bcd43 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.896 [2024-11-04 14:47:59.766550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:29.896 [2024-11-04 14:47:59.766896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:29.896 [2024-11-04 14:47:59.766922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:29.896 [2024-11-04 14:47:59.767247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:29.896 NewBaseBdev 00:15:29.896 [2024-11-04 14:47:59.767483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:29.896 [2024-11-04 14:47:59.767500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:29.896 [2024-11-04 14:47:59.767683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:29.896 
14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.896 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.154 [ 00:15:30.154 { 00:15:30.154 "name": "NewBaseBdev", 00:15:30.154 "aliases": [ 00:15:30.154 "72d04789-0ad4-4f3f-9336-d4298b4bcd43" 00:15:30.154 ], 00:15:30.154 "product_name": "Malloc disk", 00:15:30.154 "block_size": 512, 00:15:30.154 "num_blocks": 65536, 00:15:30.154 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:30.154 "assigned_rate_limits": { 00:15:30.154 "rw_ios_per_sec": 0, 00:15:30.154 "rw_mbytes_per_sec": 0, 00:15:30.154 "r_mbytes_per_sec": 0, 00:15:30.154 "w_mbytes_per_sec": 0 00:15:30.154 }, 00:15:30.154 "claimed": true, 00:15:30.154 "claim_type": "exclusive_write", 00:15:30.154 "zoned": false, 00:15:30.154 "supported_io_types": { 00:15:30.154 "read": true, 00:15:30.154 "write": true, 00:15:30.154 
"unmap": true, 00:15:30.154 "flush": true, 00:15:30.154 "reset": true, 00:15:30.154 "nvme_admin": false, 00:15:30.154 "nvme_io": false, 00:15:30.154 "nvme_io_md": false, 00:15:30.154 "write_zeroes": true, 00:15:30.154 "zcopy": true, 00:15:30.154 "get_zone_info": false, 00:15:30.154 "zone_management": false, 00:15:30.154 "zone_append": false, 00:15:30.154 "compare": false, 00:15:30.154 "compare_and_write": false, 00:15:30.154 "abort": true, 00:15:30.154 "seek_hole": false, 00:15:30.154 "seek_data": false, 00:15:30.154 "copy": true, 00:15:30.154 "nvme_iov_md": false 00:15:30.154 }, 00:15:30.154 "memory_domains": [ 00:15:30.154 { 00:15:30.154 "dma_device_id": "system", 00:15:30.154 "dma_device_type": 1 00:15:30.154 }, 00:15:30.154 { 00:15:30.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.154 "dma_device_type": 2 00:15:30.154 } 00:15:30.154 ], 00:15:30.154 "driver_specific": {} 00:15:30.154 } 00:15:30.154 ] 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.154 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.155 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.155 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.155 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.155 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.155 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.155 "name": "Existed_Raid", 00:15:30.155 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:30.155 "strip_size_kb": 64, 00:15:30.155 "state": "online", 00:15:30.155 "raid_level": "raid0", 00:15:30.155 "superblock": true, 00:15:30.155 "num_base_bdevs": 3, 00:15:30.155 "num_base_bdevs_discovered": 3, 00:15:30.155 "num_base_bdevs_operational": 3, 00:15:30.155 "base_bdevs_list": [ 00:15:30.155 { 00:15:30.155 "name": "NewBaseBdev", 00:15:30.155 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:30.155 "is_configured": true, 00:15:30.155 "data_offset": 2048, 00:15:30.155 "data_size": 63488 00:15:30.155 }, 00:15:30.155 { 00:15:30.155 "name": "BaseBdev2", 00:15:30.155 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:30.155 "is_configured": true, 00:15:30.155 "data_offset": 2048, 00:15:30.155 "data_size": 63488 00:15:30.155 }, 00:15:30.155 { 00:15:30.155 "name": "BaseBdev3", 00:15:30.155 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:30.155 
"is_configured": true, 00:15:30.155 "data_offset": 2048, 00:15:30.155 "data_size": 63488 00:15:30.155 } 00:15:30.155 ] 00:15:30.155 }' 00:15:30.155 14:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.155 14:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.412 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:30.413 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:30.413 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:30.413 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:30.413 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:30.413 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.671 [2024-11-04 14:48:00.311168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:30.671 "name": "Existed_Raid", 00:15:30.671 "aliases": [ 00:15:30.671 "cb16669c-ddb7-49fa-9327-84aab125220a" 00:15:30.671 ], 00:15:30.671 "product_name": "Raid 
Volume", 00:15:30.671 "block_size": 512, 00:15:30.671 "num_blocks": 190464, 00:15:30.671 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:30.671 "assigned_rate_limits": { 00:15:30.671 "rw_ios_per_sec": 0, 00:15:30.671 "rw_mbytes_per_sec": 0, 00:15:30.671 "r_mbytes_per_sec": 0, 00:15:30.671 "w_mbytes_per_sec": 0 00:15:30.671 }, 00:15:30.671 "claimed": false, 00:15:30.671 "zoned": false, 00:15:30.671 "supported_io_types": { 00:15:30.671 "read": true, 00:15:30.671 "write": true, 00:15:30.671 "unmap": true, 00:15:30.671 "flush": true, 00:15:30.671 "reset": true, 00:15:30.671 "nvme_admin": false, 00:15:30.671 "nvme_io": false, 00:15:30.671 "nvme_io_md": false, 00:15:30.671 "write_zeroes": true, 00:15:30.671 "zcopy": false, 00:15:30.671 "get_zone_info": false, 00:15:30.671 "zone_management": false, 00:15:30.671 "zone_append": false, 00:15:30.671 "compare": false, 00:15:30.671 "compare_and_write": false, 00:15:30.671 "abort": false, 00:15:30.671 "seek_hole": false, 00:15:30.671 "seek_data": false, 00:15:30.671 "copy": false, 00:15:30.671 "nvme_iov_md": false 00:15:30.671 }, 00:15:30.671 "memory_domains": [ 00:15:30.671 { 00:15:30.671 "dma_device_id": "system", 00:15:30.671 "dma_device_type": 1 00:15:30.671 }, 00:15:30.671 { 00:15:30.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.671 "dma_device_type": 2 00:15:30.671 }, 00:15:30.671 { 00:15:30.671 "dma_device_id": "system", 00:15:30.671 "dma_device_type": 1 00:15:30.671 }, 00:15:30.671 { 00:15:30.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.671 "dma_device_type": 2 00:15:30.671 }, 00:15:30.671 { 00:15:30.671 "dma_device_id": "system", 00:15:30.671 "dma_device_type": 1 00:15:30.671 }, 00:15:30.671 { 00:15:30.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.671 "dma_device_type": 2 00:15:30.671 } 00:15:30.671 ], 00:15:30.671 "driver_specific": { 00:15:30.671 "raid": { 00:15:30.671 "uuid": "cb16669c-ddb7-49fa-9327-84aab125220a", 00:15:30.671 "strip_size_kb": 64, 00:15:30.671 "state": "online", 
00:15:30.671 "raid_level": "raid0", 00:15:30.671 "superblock": true, 00:15:30.671 "num_base_bdevs": 3, 00:15:30.671 "num_base_bdevs_discovered": 3, 00:15:30.671 "num_base_bdevs_operational": 3, 00:15:30.671 "base_bdevs_list": [ 00:15:30.671 { 00:15:30.671 "name": "NewBaseBdev", 00:15:30.671 "uuid": "72d04789-0ad4-4f3f-9336-d4298b4bcd43", 00:15:30.671 "is_configured": true, 00:15:30.671 "data_offset": 2048, 00:15:30.671 "data_size": 63488 00:15:30.671 }, 00:15:30.671 { 00:15:30.671 "name": "BaseBdev2", 00:15:30.671 "uuid": "3fc20ba2-aae2-4e1e-ac7d-c1052c14ab13", 00:15:30.671 "is_configured": true, 00:15:30.671 "data_offset": 2048, 00:15:30.671 "data_size": 63488 00:15:30.671 }, 00:15:30.671 { 00:15:30.671 "name": "BaseBdev3", 00:15:30.671 "uuid": "e590756b-6589-4857-9508-6afe7b0ed400", 00:15:30.671 "is_configured": true, 00:15:30.671 "data_offset": 2048, 00:15:30.671 "data_size": 63488 00:15:30.671 } 00:15:30.671 ] 00:15:30.671 } 00:15:30.671 } 00:15:30.671 }' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:30.671 BaseBdev2 00:15:30.671 BaseBdev3' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.671 14:48:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:30.671 14:48:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.671 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.929 [2024-11-04 14:48:00.598825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.929 [2024-11-04 14:48:00.598980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.929 [2024-11-04 14:48:00.599109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.929 [2024-11-04 14:48:00.599207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.929 [2024-11-04 14:48:00.599246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64543 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64543 ']' 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
64543 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64543 00:15:30.929 killing process with pid 64543 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64543' 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64543 00:15:30.929 [2024-11-04 14:48:00.638417] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.929 14:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64543 00:15:31.187 [2024-11-04 14:48:00.924999] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.559 14:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:32.559 00:15:32.559 real 0m12.015s 00:15:32.559 user 0m19.727s 00:15:32.559 sys 0m1.720s 00:15:32.559 14:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:32.559 14:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.559 ************************************ 00:15:32.559 END TEST raid_state_function_test_sb 00:15:32.559 ************************************ 00:15:32.559 14:48:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:32.559 14:48:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:32.559 
14:48:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:32.559 14:48:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.559 ************************************ 00:15:32.559 START TEST raid_superblock_test 00:15:32.559 ************************************ 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65178 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65178 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65178 ']' 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:32.559 14:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.559 [2024-11-04 14:48:02.251596] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:15:32.560 [2024-11-04 14:48:02.252060] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65178 ] 00:15:32.560 [2024-11-04 14:48:02.444565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.818 [2024-11-04 14:48:02.619813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.075 [2024-11-04 14:48:02.866476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.075 [2024-11-04 14:48:02.866567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:33.641 
14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.641 malloc1 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.641 [2024-11-04 14:48:03.322381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:33.641 [2024-11-04 14:48:03.322633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.641 [2024-11-04 14:48:03.322718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:33.641 [2024-11-04 14:48:03.322856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.641 [2024-11-04 14:48:03.326228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.641 [2024-11-04 14:48:03.326447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:33.641 pt1 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.641 malloc2 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.641 [2024-11-04 14:48:03.384151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.641 [2024-11-04 14:48:03.384255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.641 [2024-11-04 14:48:03.384295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.641 [2024-11-04 14:48:03.384311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.641 [2024-11-04 14:48:03.387436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.641 [2024-11-04 14:48:03.387485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.641 
pt2 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.641 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.641 malloc3 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.642 [2024-11-04 14:48:03.454376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:33.642 [2024-11-04 14:48:03.454443] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.642 [2024-11-04 14:48:03.454478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:33.642 [2024-11-04 14:48:03.454494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.642 [2024-11-04 14:48:03.457595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.642 [2024-11-04 14:48:03.457778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:33.642 pt3 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.642 [2024-11-04 14:48:03.466521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:33.642 [2024-11-04 14:48:03.469149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.642 [2024-11-04 14:48:03.469422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:33.642 [2024-11-04 14:48:03.469675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:33.642 [2024-11-04 14:48:03.469700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:33.642 [2024-11-04 14:48:03.470032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:15:33.642 [2024-11-04 14:48:03.470301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:33.642 [2024-11-04 14:48:03.470319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:33.642 [2024-11-04 14:48:03.470557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.642 14:48:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.642 "name": "raid_bdev1", 00:15:33.642 "uuid": "0eceb814-554f-486f-8f9e-9bd857afd280", 00:15:33.642 "strip_size_kb": 64, 00:15:33.642 "state": "online", 00:15:33.642 "raid_level": "raid0", 00:15:33.642 "superblock": true, 00:15:33.642 "num_base_bdevs": 3, 00:15:33.642 "num_base_bdevs_discovered": 3, 00:15:33.642 "num_base_bdevs_operational": 3, 00:15:33.642 "base_bdevs_list": [ 00:15:33.642 { 00:15:33.642 "name": "pt1", 00:15:33.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.642 "is_configured": true, 00:15:33.642 "data_offset": 2048, 00:15:33.642 "data_size": 63488 00:15:33.642 }, 00:15:33.642 { 00:15:33.642 "name": "pt2", 00:15:33.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.642 "is_configured": true, 00:15:33.642 "data_offset": 2048, 00:15:33.642 "data_size": 63488 00:15:33.642 }, 00:15:33.642 { 00:15:33.642 "name": "pt3", 00:15:33.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.642 "is_configured": true, 00:15:33.642 "data_offset": 2048, 00:15:33.642 "data_size": 63488 00:15:33.642 } 00:15:33.642 ] 00:15:33.642 }' 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.642 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.207 [2024-11-04 14:48:03.955100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.207 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.207 "name": "raid_bdev1", 00:15:34.207 "aliases": [ 00:15:34.207 "0eceb814-554f-486f-8f9e-9bd857afd280" 00:15:34.207 ], 00:15:34.207 "product_name": "Raid Volume", 00:15:34.207 "block_size": 512, 00:15:34.207 "num_blocks": 190464, 00:15:34.207 "uuid": "0eceb814-554f-486f-8f9e-9bd857afd280", 00:15:34.207 "assigned_rate_limits": { 00:15:34.207 "rw_ios_per_sec": 0, 00:15:34.207 "rw_mbytes_per_sec": 0, 00:15:34.207 "r_mbytes_per_sec": 0, 00:15:34.207 "w_mbytes_per_sec": 0 00:15:34.207 }, 00:15:34.207 "claimed": false, 00:15:34.207 "zoned": false, 00:15:34.207 "supported_io_types": { 00:15:34.207 "read": true, 00:15:34.207 "write": true, 00:15:34.207 "unmap": true, 00:15:34.207 "flush": true, 00:15:34.207 "reset": true, 00:15:34.207 "nvme_admin": false, 00:15:34.207 "nvme_io": false, 00:15:34.207 "nvme_io_md": false, 00:15:34.207 "write_zeroes": true, 00:15:34.207 "zcopy": false, 00:15:34.207 "get_zone_info": false, 00:15:34.207 "zone_management": false, 00:15:34.207 "zone_append": false, 00:15:34.207 "compare": 
false, 00:15:34.207 "compare_and_write": false, 00:15:34.207 "abort": false, 00:15:34.207 "seek_hole": false, 00:15:34.207 "seek_data": false, 00:15:34.207 "copy": false, 00:15:34.207 "nvme_iov_md": false 00:15:34.207 }, 00:15:34.207 "memory_domains": [ 00:15:34.207 { 00:15:34.207 "dma_device_id": "system", 00:15:34.207 "dma_device_type": 1 00:15:34.207 }, 00:15:34.207 { 00:15:34.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.207 "dma_device_type": 2 00:15:34.207 }, 00:15:34.207 { 00:15:34.207 "dma_device_id": "system", 00:15:34.207 "dma_device_type": 1 00:15:34.207 }, 00:15:34.207 { 00:15:34.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.207 "dma_device_type": 2 00:15:34.207 }, 00:15:34.207 { 00:15:34.207 "dma_device_id": "system", 00:15:34.207 "dma_device_type": 1 00:15:34.207 }, 00:15:34.207 { 00:15:34.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.207 "dma_device_type": 2 00:15:34.207 } 00:15:34.207 ], 00:15:34.207 "driver_specific": { 00:15:34.207 "raid": { 00:15:34.207 "uuid": "0eceb814-554f-486f-8f9e-9bd857afd280", 00:15:34.207 "strip_size_kb": 64, 00:15:34.207 "state": "online", 00:15:34.207 "raid_level": "raid0", 00:15:34.207 "superblock": true, 00:15:34.207 "num_base_bdevs": 3, 00:15:34.207 "num_base_bdevs_discovered": 3, 00:15:34.207 "num_base_bdevs_operational": 3, 00:15:34.207 "base_bdevs_list": [ 00:15:34.207 { 00:15:34.207 "name": "pt1", 00:15:34.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.208 "is_configured": true, 00:15:34.208 "data_offset": 2048, 00:15:34.208 "data_size": 63488 00:15:34.208 }, 00:15:34.208 { 00:15:34.208 "name": "pt2", 00:15:34.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.208 "is_configured": true, 00:15:34.208 "data_offset": 2048, 00:15:34.208 "data_size": 63488 00:15:34.208 }, 00:15:34.208 { 00:15:34.208 "name": "pt3", 00:15:34.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.208 "is_configured": true, 00:15:34.208 "data_offset": 2048, 00:15:34.208 "data_size": 
63488 00:15:34.208 } 00:15:34.208 ] 00:15:34.208 } 00:15:34.208 } 00:15:34.208 }' 00:15:34.208 14:48:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.208 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:34.208 pt2 00:15:34.208 pt3' 00:15:34.208 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.466 [2024-11-04 14:48:04.271078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0eceb814-554f-486f-8f9e-9bd857afd280 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0eceb814-554f-486f-8f9e-9bd857afd280 ']' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.466 [2024-11-04 14:48:04.318718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.466 [2024-11-04 14:48:04.318888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.466 [2024-11-04 14:48:04.319028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.466 [2024-11-04 14:48:04.319122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.466 [2024-11-04 14:48:04.319140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.466 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.725 [2024-11-04 14:48:04.470837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:34.725 [2024-11-04 14:48:04.473547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:34.725 [2024-11-04 14:48:04.473628] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:34.725 [2024-11-04 14:48:04.473710] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:34.725 [2024-11-04 14:48:04.473792] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:34.725 [2024-11-04 14:48:04.473829] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:34.725 [2024-11-04 14:48:04.473859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.725 [2024-11-04 14:48:04.473876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:34.725 request: 00:15:34.725 { 00:15:34.725 "name": "raid_bdev1", 00:15:34.725 "raid_level": "raid0", 00:15:34.725 "base_bdevs": [ 00:15:34.725 "malloc1", 00:15:34.725 "malloc2", 00:15:34.725 "malloc3" 00:15:34.725 ], 00:15:34.725 "strip_size_kb": 64, 00:15:34.725 "superblock": false, 00:15:34.725 "method": "bdev_raid_create", 00:15:34.725 "req_id": 1 00:15:34.725 } 00:15:34.725 Got JSON-RPC error response 00:15:34.725 response: 00:15:34.725 { 00:15:34.725 "code": -17, 00:15:34.725 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:34.725 } 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.725 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.725 [2024-11-04 14:48:04.538764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:34.725 [2024-11-04 14:48:04.538968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.725 [2024-11-04 14:48:04.539016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:34.725 [2024-11-04 14:48:04.539033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.725 [2024-11-04 14:48:04.542276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.725 [2024-11-04 14:48:04.542321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:34.725 [2024-11-04 14:48:04.542443] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:34.726 [2024-11-04 14:48:04.542520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:15:34.726 pt1 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.726 "name": "raid_bdev1", 00:15:34.726 "uuid": "0eceb814-554f-486f-8f9e-9bd857afd280", 00:15:34.726 
"strip_size_kb": 64, 00:15:34.726 "state": "configuring", 00:15:34.726 "raid_level": "raid0", 00:15:34.726 "superblock": true, 00:15:34.726 "num_base_bdevs": 3, 00:15:34.726 "num_base_bdevs_discovered": 1, 00:15:34.726 "num_base_bdevs_operational": 3, 00:15:34.726 "base_bdevs_list": [ 00:15:34.726 { 00:15:34.726 "name": "pt1", 00:15:34.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.726 "is_configured": true, 00:15:34.726 "data_offset": 2048, 00:15:34.726 "data_size": 63488 00:15:34.726 }, 00:15:34.726 { 00:15:34.726 "name": null, 00:15:34.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.726 "is_configured": false, 00:15:34.726 "data_offset": 2048, 00:15:34.726 "data_size": 63488 00:15:34.726 }, 00:15:34.726 { 00:15:34.726 "name": null, 00:15:34.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.726 "is_configured": false, 00:15:34.726 "data_offset": 2048, 00:15:34.726 "data_size": 63488 00:15:34.726 } 00:15:34.726 ] 00:15:34.726 }' 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.726 14:48:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.321 [2024-11-04 14:48:05.078987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:35.321 [2024-11-04 14:48:05.079076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.321 [2024-11-04 14:48:05.079117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:15:35.321 [2024-11-04 14:48:05.079133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.321 [2024-11-04 14:48:05.079821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.321 [2024-11-04 14:48:05.079863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:35.321 [2024-11-04 14:48:05.080026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:35.321 [2024-11-04 14:48:05.080063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.321 pt2 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.321 [2024-11-04 14:48:05.086926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.321 14:48:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.321 "name": "raid_bdev1", 00:15:35.321 "uuid": "0eceb814-554f-486f-8f9e-9bd857afd280", 00:15:35.321 "strip_size_kb": 64, 00:15:35.321 "state": "configuring", 00:15:35.321 "raid_level": "raid0", 00:15:35.321 "superblock": true, 00:15:35.321 "num_base_bdevs": 3, 00:15:35.321 "num_base_bdevs_discovered": 1, 00:15:35.321 "num_base_bdevs_operational": 3, 00:15:35.321 "base_bdevs_list": [ 00:15:35.321 { 00:15:35.321 "name": "pt1", 00:15:35.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.321 "is_configured": true, 00:15:35.321 "data_offset": 2048, 00:15:35.321 "data_size": 63488 00:15:35.321 }, 00:15:35.321 { 00:15:35.321 "name": null, 00:15:35.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.321 "is_configured": false, 00:15:35.321 "data_offset": 0, 00:15:35.321 "data_size": 63488 00:15:35.321 }, 00:15:35.321 { 00:15:35.321 "name": null, 00:15:35.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.321 
"is_configured": false, 00:15:35.321 "data_offset": 2048, 00:15:35.321 "data_size": 63488 00:15:35.321 } 00:15:35.321 ] 00:15:35.321 }' 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.321 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.887 [2024-11-04 14:48:05.627130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:35.887 [2024-11-04 14:48:05.627265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.887 [2024-11-04 14:48:05.627305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:35.887 [2024-11-04 14:48:05.627325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.887 [2024-11-04 14:48:05.628071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.887 [2024-11-04 14:48:05.628111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:35.887 [2024-11-04 14:48:05.628231] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:35.887 [2024-11-04 14:48:05.628306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.887 pt2 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.887 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.887 [2024-11-04 14:48:05.635070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:35.887 [2024-11-04 14:48:05.635314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.887 [2024-11-04 14:48:05.635366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:35.887 [2024-11-04 14:48:05.635388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.888 [2024-11-04 14:48:05.635913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.888 [2024-11-04 14:48:05.635947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:35.888 [2024-11-04 14:48:05.636022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:35.888 [2024-11-04 14:48:05.636055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:35.888 [2024-11-04 14:48:05.636202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:35.888 [2024-11-04 14:48:05.636224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:35.888 [2024-11-04 14:48:05.636592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:35.888 [2024-11-04 14:48:05.636822] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:35.888 [2024-11-04 14:48:05.636836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:35.888 [2024-11-04 14:48:05.637015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.888 pt3 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.888 "name": "raid_bdev1", 00:15:35.888 "uuid": "0eceb814-554f-486f-8f9e-9bd857afd280", 00:15:35.888 "strip_size_kb": 64, 00:15:35.888 "state": "online", 00:15:35.888 "raid_level": "raid0", 00:15:35.888 "superblock": true, 00:15:35.888 "num_base_bdevs": 3, 00:15:35.888 "num_base_bdevs_discovered": 3, 00:15:35.888 "num_base_bdevs_operational": 3, 00:15:35.888 "base_bdevs_list": [ 00:15:35.888 { 00:15:35.888 "name": "pt1", 00:15:35.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.888 "is_configured": true, 00:15:35.888 "data_offset": 2048, 00:15:35.888 "data_size": 63488 00:15:35.888 }, 00:15:35.888 { 00:15:35.888 "name": "pt2", 00:15:35.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.888 "is_configured": true, 00:15:35.888 "data_offset": 2048, 00:15:35.888 "data_size": 63488 00:15:35.888 }, 00:15:35.888 { 00:15:35.888 "name": "pt3", 00:15:35.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.888 "is_configured": true, 00:15:35.888 "data_offset": 2048, 00:15:35.888 "data_size": 63488 00:15:35.888 } 00:15:35.888 ] 00:15:35.888 }' 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.888 14:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:36.455 14:48:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.455 [2024-11-04 14:48:06.187800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.455 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.455 "name": "raid_bdev1", 00:15:36.455 "aliases": [ 00:15:36.455 "0eceb814-554f-486f-8f9e-9bd857afd280" 00:15:36.455 ], 00:15:36.455 "product_name": "Raid Volume", 00:15:36.455 "block_size": 512, 00:15:36.455 "num_blocks": 190464, 00:15:36.455 "uuid": "0eceb814-554f-486f-8f9e-9bd857afd280", 00:15:36.455 "assigned_rate_limits": { 00:15:36.455 "rw_ios_per_sec": 0, 00:15:36.455 "rw_mbytes_per_sec": 0, 00:15:36.455 "r_mbytes_per_sec": 0, 00:15:36.455 "w_mbytes_per_sec": 0 00:15:36.455 }, 00:15:36.455 "claimed": false, 00:15:36.455 "zoned": false, 00:15:36.455 "supported_io_types": { 00:15:36.455 "read": true, 00:15:36.455 "write": true, 00:15:36.455 "unmap": true, 00:15:36.455 "flush": true, 00:15:36.455 "reset": true, 00:15:36.455 "nvme_admin": false, 00:15:36.455 "nvme_io": false, 00:15:36.455 "nvme_io_md": false, 00:15:36.455 
"write_zeroes": true, 00:15:36.455 "zcopy": false, 00:15:36.455 "get_zone_info": false, 00:15:36.455 "zone_management": false, 00:15:36.455 "zone_append": false, 00:15:36.455 "compare": false, 00:15:36.455 "compare_and_write": false, 00:15:36.455 "abort": false, 00:15:36.455 "seek_hole": false, 00:15:36.455 "seek_data": false, 00:15:36.455 "copy": false, 00:15:36.455 "nvme_iov_md": false 00:15:36.455 }, 00:15:36.455 "memory_domains": [ 00:15:36.455 { 00:15:36.455 "dma_device_id": "system", 00:15:36.455 "dma_device_type": 1 00:15:36.455 }, 00:15:36.455 { 00:15:36.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.455 "dma_device_type": 2 00:15:36.455 }, 00:15:36.455 { 00:15:36.455 "dma_device_id": "system", 00:15:36.455 "dma_device_type": 1 00:15:36.455 }, 00:15:36.455 { 00:15:36.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.455 "dma_device_type": 2 00:15:36.455 }, 00:15:36.455 { 00:15:36.455 "dma_device_id": "system", 00:15:36.455 "dma_device_type": 1 00:15:36.455 }, 00:15:36.455 { 00:15:36.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.455 "dma_device_type": 2 00:15:36.455 } 00:15:36.455 ], 00:15:36.455 "driver_specific": { 00:15:36.455 "raid": { 00:15:36.455 "uuid": "0eceb814-554f-486f-8f9e-9bd857afd280", 00:15:36.455 "strip_size_kb": 64, 00:15:36.455 "state": "online", 00:15:36.455 "raid_level": "raid0", 00:15:36.455 "superblock": true, 00:15:36.455 "num_base_bdevs": 3, 00:15:36.455 "num_base_bdevs_discovered": 3, 00:15:36.455 "num_base_bdevs_operational": 3, 00:15:36.455 "base_bdevs_list": [ 00:15:36.455 { 00:15:36.455 "name": "pt1", 00:15:36.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.455 "is_configured": true, 00:15:36.455 "data_offset": 2048, 00:15:36.455 "data_size": 63488 00:15:36.455 }, 00:15:36.455 { 00:15:36.455 "name": "pt2", 00:15:36.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.455 "is_configured": true, 00:15:36.455 "data_offset": 2048, 00:15:36.455 "data_size": 63488 00:15:36.455 }, 00:15:36.455 
{ 00:15:36.455 "name": "pt3", 00:15:36.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.455 "is_configured": true, 00:15:36.455 "data_offset": 2048, 00:15:36.455 "data_size": 63488 00:15:36.455 } 00:15:36.455 ] 00:15:36.455 } 00:15:36.455 } 00:15:36.455 }' 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:36.456 pt2 00:15:36.456 pt3' 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.456 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:36.714 14:48:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.714 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:36.715 
[2024-11-04 14:48:06.511768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0eceb814-554f-486f-8f9e-9bd857afd280 '!=' 0eceb814-554f-486f-8f9e-9bd857afd280 ']' 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65178 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65178 ']' 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65178 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65178 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65178' 00:15:36.715 killing process with pid 65178 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65178 00:15:36.715 [2024-11-04 14:48:06.595292] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.715 14:48:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 65178 00:15:36.715 [2024-11-04 14:48:06.595639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.715 [2024-11-04 14:48:06.595871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.715 [2024-11-04 14:48:06.596027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:37.281 [2024-11-04 14:48:06.895321] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.260 ************************************ 00:15:38.260 END TEST raid_superblock_test 00:15:38.260 ************************************ 00:15:38.260 14:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:38.260 00:15:38.260 real 0m5.928s 00:15:38.260 user 0m8.765s 00:15:38.260 sys 0m0.943s 00:15:38.260 14:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:38.260 14:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.260 14:48:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:15:38.260 14:48:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:38.260 14:48:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:38.260 14:48:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.260 ************************************ 00:15:38.260 START TEST raid_read_error_test 00:15:38.260 ************************************ 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:38.260 14:48:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:38.260 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.B2IZJO8Vwq 00:15:38.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65440 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65440 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65440 ']' 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:38.261 14:48:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.519 [2024-11-04 14:48:08.232553] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:15:38.519 [2024-11-04 14:48:08.232730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65440 ] 00:15:38.777 [2024-11-04 14:48:08.416743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.777 [2024-11-04 14:48:08.578859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.036 [2024-11-04 14:48:08.818876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.036 [2024-11-04 14:48:08.819296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.603 BaseBdev1_malloc 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.603 true 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.603 [2024-11-04 14:48:09.307050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:39.603 [2024-11-04 14:48:09.307347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.603 [2024-11-04 14:48:09.307399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:39.603 [2024-11-04 14:48:09.307421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.603 [2024-11-04 14:48:09.310888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.603 [2024-11-04 14:48:09.311110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.603 BaseBdev1 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.603 BaseBdev2_malloc 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.603 true 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.603 [2024-11-04 14:48:09.379262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:39.603 [2024-11-04 14:48:09.379380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.603 [2024-11-04 14:48:09.379436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:39.603 [2024-11-04 14:48:09.379458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.603 [2024-11-04 14:48:09.382594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.603 [2024-11-04 14:48:09.382645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:39.603 BaseBdev2 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.603 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.604 BaseBdev3_malloc 00:15:39.604 14:48:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.604 true 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.604 [2024-11-04 14:48:09.459001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:39.604 [2024-11-04 14:48:09.459121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.604 [2024-11-04 14:48:09.459150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:39.604 [2024-11-04 14:48:09.459168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.604 [2024-11-04 14:48:09.462332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.604 [2024-11-04 14:48:09.462383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:39.604 BaseBdev3 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.604 [2024-11-04 14:48:09.467408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.604 [2024-11-04 14:48:09.471141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.604 [2024-11-04 14:48:09.471566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.604 [2024-11-04 14:48:09.472037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:39.604 [2024-11-04 14:48:09.472183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:39.604 [2024-11-04 14:48:09.472617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:39.604 [2024-11-04 14:48:09.473010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:39.604 [2024-11-04 14:48:09.473154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:39.604 [2024-11-04 14:48:09.473531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.604 14:48:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.604 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.862 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.862 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.862 "name": "raid_bdev1", 00:15:39.862 "uuid": "aece69da-0982-4a1e-967e-c8df4a1f346b", 00:15:39.862 "strip_size_kb": 64, 00:15:39.862 "state": "online", 00:15:39.862 "raid_level": "raid0", 00:15:39.862 "superblock": true, 00:15:39.862 "num_base_bdevs": 3, 00:15:39.862 "num_base_bdevs_discovered": 3, 00:15:39.862 "num_base_bdevs_operational": 3, 00:15:39.862 "base_bdevs_list": [ 00:15:39.862 { 00:15:39.862 "name": "BaseBdev1", 00:15:39.862 "uuid": "20713ddf-2267-576c-abc3-c40281c8b565", 00:15:39.862 "is_configured": true, 00:15:39.862 "data_offset": 2048, 00:15:39.862 "data_size": 63488 00:15:39.862 }, 00:15:39.862 { 00:15:39.862 "name": "BaseBdev2", 00:15:39.862 "uuid": "98b74d74-5530-57df-a902-35da617d07da", 00:15:39.862 "is_configured": true, 00:15:39.862 "data_offset": 2048, 00:15:39.862 "data_size": 63488 
00:15:39.862 }, 00:15:39.862 { 00:15:39.862 "name": "BaseBdev3", 00:15:39.862 "uuid": "c8846d5a-55a7-584e-80a7-b6fc9f92f42c", 00:15:39.862 "is_configured": true, 00:15:39.862 "data_offset": 2048, 00:15:39.862 "data_size": 63488 00:15:39.862 } 00:15:39.862 ] 00:15:39.862 }' 00:15:39.862 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.862 14:48:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.120 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:40.120 14:48:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:40.378 [2024-11-04 14:48:10.121987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.315 14:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.315 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.315 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.315 14:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.315 "name": "raid_bdev1", 00:15:41.315 "uuid": "aece69da-0982-4a1e-967e-c8df4a1f346b", 00:15:41.315 "strip_size_kb": 64, 00:15:41.315 "state": "online", 00:15:41.315 "raid_level": "raid0", 00:15:41.315 "superblock": true, 00:15:41.315 "num_base_bdevs": 3, 00:15:41.315 "num_base_bdevs_discovered": 3, 00:15:41.315 "num_base_bdevs_operational": 3, 00:15:41.315 "base_bdevs_list": [ 00:15:41.315 { 00:15:41.315 "name": "BaseBdev1", 00:15:41.315 "uuid": "20713ddf-2267-576c-abc3-c40281c8b565", 00:15:41.315 "is_configured": true, 00:15:41.315 "data_offset": 2048, 00:15:41.315 "data_size": 63488 
00:15:41.315 }, 00:15:41.315 { 00:15:41.315 "name": "BaseBdev2", 00:15:41.315 "uuid": "98b74d74-5530-57df-a902-35da617d07da", 00:15:41.315 "is_configured": true, 00:15:41.315 "data_offset": 2048, 00:15:41.315 "data_size": 63488 00:15:41.315 }, 00:15:41.315 { 00:15:41.315 "name": "BaseBdev3", 00:15:41.315 "uuid": "c8846d5a-55a7-584e-80a7-b6fc9f92f42c", 00:15:41.315 "is_configured": true, 00:15:41.315 "data_offset": 2048, 00:15:41.315 "data_size": 63488 00:15:41.315 } 00:15:41.315 ] 00:15:41.315 }' 00:15:41.315 14:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.315 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.881 14:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:41.881 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.881 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.881 [2024-11-04 14:48:11.545040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.881 [2024-11-04 14:48:11.545215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.881 { 00:15:41.881 "results": [ 00:15:41.881 { 00:15:41.881 "job": "raid_bdev1", 00:15:41.881 "core_mask": "0x1", 00:15:41.881 "workload": "randrw", 00:15:41.881 "percentage": 50, 00:15:41.881 "status": "finished", 00:15:41.881 "queue_depth": 1, 00:15:41.881 "io_size": 131072, 00:15:41.881 "runtime": 1.420142, 00:15:41.881 "iops": 9306.81579729351, 00:15:41.881 "mibps": 1163.3519746616887, 00:15:41.881 "io_failed": 1, 00:15:41.881 "io_timeout": 0, 00:15:41.881 "avg_latency_us": 151.71181556830217, 00:15:41.881 "min_latency_us": 38.63272727272727, 00:15:41.881 "max_latency_us": 1817.1345454545456 00:15:41.881 } 00:15:41.881 ], 00:15:41.881 "core_count": 1 00:15:41.881 } 00:15:41.881 [2024-11-04 
14:48:11.548869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.881 [2024-11-04 14:48:11.548991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.881 [2024-11-04 14:48:11.549058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.881 [2024-11-04 14:48:11.549076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:41.881 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.881 14:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65440 00:15:41.881 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65440 ']' 00:15:41.881 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65440 00:15:41.881 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:15:41.882 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:41.882 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65440 00:15:41.882 killing process with pid 65440 00:15:41.882 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:41.882 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:41.882 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65440' 00:15:41.882 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65440 00:15:41.882 14:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65440 00:15:41.882 [2024-11-04 14:48:11.589779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.139 [2024-11-04 
14:48:11.813762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.B2IZJO8Vwq 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:15:43.513 00:15:43.513 real 0m4.969s 00:15:43.513 user 0m6.099s 00:15:43.513 sys 0m0.644s 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:43.513 ************************************ 00:15:43.513 END TEST raid_read_error_test 00:15:43.513 ************************************ 00:15:43.513 14:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.513 14:48:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:15:43.513 14:48:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:43.513 14:48:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:43.513 14:48:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.513 ************************************ 00:15:43.513 START TEST raid_write_error_test 00:15:43.513 ************************************ 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:15:43.513 14:48:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:43.513 14:48:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NeEdtXzRYd 00:15:43.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65586 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65586 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65586 ']' 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.513 14:48:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.513 [2024-11-04 14:48:13.266453] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:15:43.513 [2024-11-04 14:48:13.266864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65586 ] 00:15:43.771 [2024-11-04 14:48:13.455109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.028 [2024-11-04 14:48:13.667886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.028 [2024-11-04 14:48:13.896904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.028 [2024-11-04 14:48:13.896996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.594 BaseBdev1_malloc 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.594 true 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.594 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.594 [2024-11-04 14:48:14.351854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:44.594 [2024-11-04 14:48:14.351927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.595 [2024-11-04 14:48:14.351957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:44.595 [2024-11-04 14:48:14.351975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.595 [2024-11-04 14:48:14.355051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.595 [2024-11-04 14:48:14.355247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:44.595 BaseBdev1 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.595 BaseBdev2_malloc 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 true 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 [2024-11-04 14:48:14.413436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:44.595 [2024-11-04 14:48:14.413535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.595 [2024-11-04 14:48:14.413562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:44.595 [2024-11-04 14:48:14.413580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.595 [2024-11-04 14:48:14.416633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.595 [2024-11-04 14:48:14.416681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:44.595 BaseBdev2 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:44.595 14:48:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 BaseBdev3_malloc 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.852 true 00:15:44.852 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.852 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:44.852 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.852 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.852 [2024-11-04 14:48:14.494478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:44.852 [2024-11-04 14:48:14.494551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.852 [2024-11-04 14:48:14.494579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:44.852 [2024-11-04 14:48:14.494596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.852 [2024-11-04 14:48:14.497659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.852 [2024-11-04 14:48:14.497708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:44.852 BaseBdev3 00:15:44.852 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.852 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:44.852 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.852 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.852 [2024-11-04 14:48:14.502692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.853 [2024-11-04 14:48:14.505400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.853 [2024-11-04 14:48:14.505523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.853 [2024-11-04 14:48:14.505792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:44.853 [2024-11-04 14:48:14.505813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:44.853 [2024-11-04 14:48:14.506125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:44.853 [2024-11-04 14:48:14.506356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:44.853 [2024-11-04 14:48:14.506382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:44.853 [2024-11-04 14:48:14.506654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.853 "name": "raid_bdev1", 00:15:44.853 "uuid": "4acc47b0-1646-4036-9a8b-3a27a7a377e4", 00:15:44.853 "strip_size_kb": 64, 00:15:44.853 "state": "online", 00:15:44.853 "raid_level": "raid0", 00:15:44.853 "superblock": true, 00:15:44.853 "num_base_bdevs": 3, 00:15:44.853 "num_base_bdevs_discovered": 3, 00:15:44.853 "num_base_bdevs_operational": 3, 00:15:44.853 "base_bdevs_list": [ 00:15:44.853 { 00:15:44.853 "name": "BaseBdev1", 
00:15:44.853 "uuid": "df1ca173-5614-55fe-b597-a3b3aacd2fc3", 00:15:44.853 "is_configured": true, 00:15:44.853 "data_offset": 2048, 00:15:44.853 "data_size": 63488 00:15:44.853 }, 00:15:44.853 { 00:15:44.853 "name": "BaseBdev2", 00:15:44.853 "uuid": "64a32e85-098b-5727-9d64-1991bcff3304", 00:15:44.853 "is_configured": true, 00:15:44.853 "data_offset": 2048, 00:15:44.853 "data_size": 63488 00:15:44.853 }, 00:15:44.853 { 00:15:44.853 "name": "BaseBdev3", 00:15:44.853 "uuid": "6bc7c776-8320-5faf-baf3-b45c70265562", 00:15:44.853 "is_configured": true, 00:15:44.853 "data_offset": 2048, 00:15:44.853 "data_size": 63488 00:15:44.853 } 00:15:44.853 ] 00:15:44.853 }' 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.853 14:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 14:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:45.419 14:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:45.419 [2024-11-04 14:48:15.160684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.352 "name": "raid_bdev1", 00:15:46.352 "uuid": "4acc47b0-1646-4036-9a8b-3a27a7a377e4", 00:15:46.352 "strip_size_kb": 64, 00:15:46.352 "state": "online", 00:15:46.352 
"raid_level": "raid0", 00:15:46.352 "superblock": true, 00:15:46.352 "num_base_bdevs": 3, 00:15:46.352 "num_base_bdevs_discovered": 3, 00:15:46.352 "num_base_bdevs_operational": 3, 00:15:46.352 "base_bdevs_list": [ 00:15:46.352 { 00:15:46.352 "name": "BaseBdev1", 00:15:46.352 "uuid": "df1ca173-5614-55fe-b597-a3b3aacd2fc3", 00:15:46.352 "is_configured": true, 00:15:46.352 "data_offset": 2048, 00:15:46.352 "data_size": 63488 00:15:46.352 }, 00:15:46.352 { 00:15:46.352 "name": "BaseBdev2", 00:15:46.352 "uuid": "64a32e85-098b-5727-9d64-1991bcff3304", 00:15:46.352 "is_configured": true, 00:15:46.352 "data_offset": 2048, 00:15:46.352 "data_size": 63488 00:15:46.352 }, 00:15:46.352 { 00:15:46.352 "name": "BaseBdev3", 00:15:46.352 "uuid": "6bc7c776-8320-5faf-baf3-b45c70265562", 00:15:46.352 "is_configured": true, 00:15:46.352 "data_offset": 2048, 00:15:46.352 "data_size": 63488 00:15:46.352 } 00:15:46.352 ] 00:15:46.352 }' 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.352 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.918 [2024-11-04 14:48:16.575251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.918 [2024-11-04 14:48:16.575429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.918 [2024-11-04 14:48:16.579157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.918 { 00:15:46.918 "results": [ 00:15:46.918 { 00:15:46.918 "job": "raid_bdev1", 00:15:46.918 "core_mask": "0x1", 00:15:46.918 "workload": "randrw", 00:15:46.918 "percentage": 
50, 00:15:46.918 "status": "finished", 00:15:46.918 "queue_depth": 1, 00:15:46.918 "io_size": 131072, 00:15:46.918 "runtime": 1.412122, 00:15:46.918 "iops": 9693.921630000807, 00:15:46.918 "mibps": 1211.7402037501008, 00:15:46.918 "io_failed": 1, 00:15:46.918 "io_timeout": 0, 00:15:46.918 "avg_latency_us": 145.48436310511988, 00:15:46.918 "min_latency_us": 31.650909090909092, 00:15:46.918 "max_latency_us": 1951.1854545454546 00:15:46.918 } 00:15:46.918 ], 00:15:46.918 "core_count": 1 00:15:46.918 } 00:15:46.918 [2024-11-04 14:48:16.579423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.918 [2024-11-04 14:48:16.579502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.918 [2024-11-04 14:48:16.579520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65586 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65586 ']' 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65586 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65586 00:15:46.918 killing process with pid 65586 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:46.918 14:48:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65586' 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65586 00:15:46.918 [2024-11-04 14:48:16.615899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.918 14:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65586 00:15:47.176 [2024-11-04 14:48:16.843236] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NeEdtXzRYd 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:48.552 00:15:48.552 real 0m4.943s 00:15:48.552 user 0m6.050s 00:15:48.552 sys 0m0.672s 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:48.552 14:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.552 ************************************ 00:15:48.552 END TEST raid_write_error_test 00:15:48.552 ************************************ 00:15:48.552 14:48:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:48.552 14:48:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:15:48.552 14:48:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:48.552 14:48:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:48.552 14:48:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.552 ************************************ 00:15:48.552 START TEST raid_state_function_test 00:15:48.552 ************************************ 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:48.552 14:48:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65735 00:15:48.552 Process raid pid: 65735 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65735' 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65735 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65735 ']' 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:48.552 14:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.552 [2024-11-04 14:48:18.258063] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:15:48.552 [2024-11-04 14:48:18.258280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.811 [2024-11-04 14:48:18.446126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.811 [2024-11-04 14:48:18.604379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.068 [2024-11-04 14:48:18.870782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.068 [2024-11-04 14:48:18.870868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.634 [2024-11-04 14:48:19.299497] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.634 [2024-11-04 14:48:19.299581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.634 [2024-11-04 14:48:19.299629] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.634 [2024-11-04 14:48:19.299663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.634 [2024-11-04 14:48:19.299673] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.634 [2024-11-04 14:48:19.299688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.634 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.634 "name": "Existed_Raid", 00:15:49.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.634 "strip_size_kb": 64, 00:15:49.634 "state": "configuring", 00:15:49.634 "raid_level": "concat", 00:15:49.634 "superblock": false, 00:15:49.634 "num_base_bdevs": 3, 00:15:49.634 "num_base_bdevs_discovered": 0, 00:15:49.634 "num_base_bdevs_operational": 3, 00:15:49.634 "base_bdevs_list": [ 00:15:49.634 { 00:15:49.634 "name": "BaseBdev1", 00:15:49.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.634 "is_configured": false, 00:15:49.634 "data_offset": 0, 00:15:49.634 "data_size": 0 00:15:49.634 }, 00:15:49.634 { 00:15:49.634 "name": "BaseBdev2", 00:15:49.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.634 "is_configured": false, 00:15:49.634 "data_offset": 0, 00:15:49.634 "data_size": 0 00:15:49.635 }, 00:15:49.635 { 00:15:49.635 "name": "BaseBdev3", 00:15:49.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.635 
"is_configured": false, 00:15:49.635 "data_offset": 0, 00:15:49.635 "data_size": 0 00:15:49.635 } 00:15:49.635 ] 00:15:49.635 }' 00:15:49.635 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.635 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.210 [2024-11-04 14:48:19.803654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.210 [2024-11-04 14:48:19.803708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.210 [2024-11-04 14:48:19.811629] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.210 [2024-11-04 14:48:19.811689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.210 [2024-11-04 14:48:19.811706] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.210 [2024-11-04 14:48:19.811722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.210 [2024-11-04 
14:48:19.811733] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.210 [2024-11-04 14:48:19.811748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.210 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 [2024-11-04 14:48:19.864490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.211 BaseBdev1 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 14:48:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 [ 00:15:50.211 { 00:15:50.211 "name": "BaseBdev1", 00:15:50.211 "aliases": [ 00:15:50.211 "8061ab19-44c6-47e0-8210-995e47942605" 00:15:50.211 ], 00:15:50.211 "product_name": "Malloc disk", 00:15:50.211 "block_size": 512, 00:15:50.211 "num_blocks": 65536, 00:15:50.211 "uuid": "8061ab19-44c6-47e0-8210-995e47942605", 00:15:50.211 "assigned_rate_limits": { 00:15:50.211 "rw_ios_per_sec": 0, 00:15:50.211 "rw_mbytes_per_sec": 0, 00:15:50.211 "r_mbytes_per_sec": 0, 00:15:50.211 "w_mbytes_per_sec": 0 00:15:50.211 }, 00:15:50.211 "claimed": true, 00:15:50.211 "claim_type": "exclusive_write", 00:15:50.211 "zoned": false, 00:15:50.211 "supported_io_types": { 00:15:50.211 "read": true, 00:15:50.211 "write": true, 00:15:50.211 "unmap": true, 00:15:50.211 "flush": true, 00:15:50.211 "reset": true, 00:15:50.211 "nvme_admin": false, 00:15:50.211 "nvme_io": false, 00:15:50.211 "nvme_io_md": false, 00:15:50.211 "write_zeroes": true, 00:15:50.211 "zcopy": true, 00:15:50.211 "get_zone_info": false, 00:15:50.211 "zone_management": false, 00:15:50.211 "zone_append": false, 00:15:50.211 "compare": false, 00:15:50.211 "compare_and_write": false, 00:15:50.211 "abort": true, 00:15:50.211 "seek_hole": false, 00:15:50.211 "seek_data": false, 00:15:50.211 "copy": true, 00:15:50.211 "nvme_iov_md": false 00:15:50.211 }, 00:15:50.211 "memory_domains": [ 00:15:50.211 { 00:15:50.211 "dma_device_id": "system", 00:15:50.211 "dma_device_type": 1 00:15:50.211 }, 00:15:50.211 { 00:15:50.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.211 "dma_device_type": 
2 00:15:50.211 } 00:15:50.211 ], 00:15:50.211 "driver_specific": {} 00:15:50.211 } 00:15:50.211 ] 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.211 "name": "Existed_Raid", 00:15:50.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.211 "strip_size_kb": 64, 00:15:50.211 "state": "configuring", 00:15:50.211 "raid_level": "concat", 00:15:50.211 "superblock": false, 00:15:50.211 "num_base_bdevs": 3, 00:15:50.211 "num_base_bdevs_discovered": 1, 00:15:50.211 "num_base_bdevs_operational": 3, 00:15:50.211 "base_bdevs_list": [ 00:15:50.211 { 00:15:50.211 "name": "BaseBdev1", 00:15:50.211 "uuid": "8061ab19-44c6-47e0-8210-995e47942605", 00:15:50.211 "is_configured": true, 00:15:50.211 "data_offset": 0, 00:15:50.211 "data_size": 65536 00:15:50.211 }, 00:15:50.211 { 00:15:50.211 "name": "BaseBdev2", 00:15:50.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.211 "is_configured": false, 00:15:50.211 "data_offset": 0, 00:15:50.211 "data_size": 0 00:15:50.211 }, 00:15:50.211 { 00:15:50.211 "name": "BaseBdev3", 00:15:50.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.211 "is_configured": false, 00:15:50.211 "data_offset": 0, 00:15:50.211 "data_size": 0 00:15:50.211 } 00:15:50.211 ] 00:15:50.211 }' 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.211 14:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.777 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.777 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.778 [2024-11-04 14:48:20.420749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.778 [2024-11-04 14:48:20.420830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.778 [2024-11-04 14:48:20.432778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.778 [2024-11-04 14:48:20.435836] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.778 [2024-11-04 14:48:20.435916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.778 [2024-11-04 14:48:20.435946] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.778 [2024-11-04 14:48:20.435976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.778 "name": "Existed_Raid", 00:15:50.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.778 "strip_size_kb": 64, 00:15:50.778 "state": "configuring", 00:15:50.778 "raid_level": "concat", 00:15:50.778 "superblock": false, 00:15:50.778 "num_base_bdevs": 3, 00:15:50.778 "num_base_bdevs_discovered": 1, 00:15:50.778 "num_base_bdevs_operational": 3, 00:15:50.778 "base_bdevs_list": [ 00:15:50.778 { 00:15:50.778 "name": "BaseBdev1", 00:15:50.778 "uuid": "8061ab19-44c6-47e0-8210-995e47942605", 00:15:50.778 "is_configured": true, 00:15:50.778 "data_offset": 0, 00:15:50.778 "data_size": 65536 
00:15:50.778 }, 00:15:50.778 { 00:15:50.778 "name": "BaseBdev2", 00:15:50.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.778 "is_configured": false, 00:15:50.778 "data_offset": 0, 00:15:50.778 "data_size": 0 00:15:50.778 }, 00:15:50.778 { 00:15:50.778 "name": "BaseBdev3", 00:15:50.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.778 "is_configured": false, 00:15:50.778 "data_offset": 0, 00:15:50.778 "data_size": 0 00:15:50.778 } 00:15:50.778 ] 00:15:50.778 }' 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.778 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 14:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.346 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.346 14:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 [2024-11-04 14:48:21.002606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.346 BaseBdev2 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:51.346 14:48:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.346 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 [ 00:15:51.346 { 00:15:51.346 "name": "BaseBdev2", 00:15:51.346 "aliases": [ 00:15:51.346 "bd1a9f94-f63e-4526-ae1e-18cef7f5d6c3" 00:15:51.346 ], 00:15:51.346 "product_name": "Malloc disk", 00:15:51.346 "block_size": 512, 00:15:51.346 "num_blocks": 65536, 00:15:51.346 "uuid": "bd1a9f94-f63e-4526-ae1e-18cef7f5d6c3", 00:15:51.346 "assigned_rate_limits": { 00:15:51.346 "rw_ios_per_sec": 0, 00:15:51.346 "rw_mbytes_per_sec": 0, 00:15:51.346 "r_mbytes_per_sec": 0, 00:15:51.346 "w_mbytes_per_sec": 0 00:15:51.346 }, 00:15:51.346 "claimed": true, 00:15:51.346 "claim_type": "exclusive_write", 00:15:51.346 "zoned": false, 00:15:51.346 "supported_io_types": { 00:15:51.346 "read": true, 00:15:51.346 "write": true, 00:15:51.346 "unmap": true, 00:15:51.346 "flush": true, 00:15:51.346 "reset": true, 00:15:51.346 "nvme_admin": false, 00:15:51.346 "nvme_io": false, 00:15:51.346 "nvme_io_md": false, 00:15:51.346 "write_zeroes": true, 00:15:51.346 "zcopy": true, 00:15:51.346 "get_zone_info": false, 00:15:51.346 "zone_management": false, 00:15:51.346 "zone_append": false, 00:15:51.346 "compare": false, 00:15:51.346 "compare_and_write": false, 00:15:51.346 "abort": true, 00:15:51.346 "seek_hole": false, 00:15:51.346 
"seek_data": false, 00:15:51.346 "copy": true, 00:15:51.346 "nvme_iov_md": false 00:15:51.346 }, 00:15:51.347 "memory_domains": [ 00:15:51.347 { 00:15:51.347 "dma_device_id": "system", 00:15:51.347 "dma_device_type": 1 00:15:51.347 }, 00:15:51.347 { 00:15:51.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.347 "dma_device_type": 2 00:15:51.347 } 00:15:51.347 ], 00:15:51.347 "driver_specific": {} 00:15:51.347 } 00:15:51.347 ] 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.347 "name": "Existed_Raid", 00:15:51.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.347 "strip_size_kb": 64, 00:15:51.347 "state": "configuring", 00:15:51.347 "raid_level": "concat", 00:15:51.347 "superblock": false, 00:15:51.347 "num_base_bdevs": 3, 00:15:51.347 "num_base_bdevs_discovered": 2, 00:15:51.347 "num_base_bdevs_operational": 3, 00:15:51.347 "base_bdevs_list": [ 00:15:51.347 { 00:15:51.347 "name": "BaseBdev1", 00:15:51.347 "uuid": "8061ab19-44c6-47e0-8210-995e47942605", 00:15:51.347 "is_configured": true, 00:15:51.347 "data_offset": 0, 00:15:51.347 "data_size": 65536 00:15:51.347 }, 00:15:51.347 { 00:15:51.347 "name": "BaseBdev2", 00:15:51.347 "uuid": "bd1a9f94-f63e-4526-ae1e-18cef7f5d6c3", 00:15:51.347 "is_configured": true, 00:15:51.347 "data_offset": 0, 00:15:51.347 "data_size": 65536 00:15:51.347 }, 00:15:51.347 { 00:15:51.347 "name": "BaseBdev3", 00:15:51.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.347 "is_configured": false, 00:15:51.347 "data_offset": 0, 00:15:51.347 "data_size": 0 00:15:51.347 } 00:15:51.347 ] 00:15:51.347 }' 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.347 14:48:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.913 [2024-11-04 14:48:21.617041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.913 [2024-11-04 14:48:21.617112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.913 [2024-11-04 14:48:21.617146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:51.913 [2024-11-04 14:48:21.617570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:51.913 [2024-11-04 14:48:21.617807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.913 [2024-11-04 14:48:21.617855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:51.913 [2024-11-04 14:48:21.618185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.913 BaseBdev3 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:51.913 14:48:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.913 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.914 [ 00:15:51.914 { 00:15:51.914 "name": "BaseBdev3", 00:15:51.914 "aliases": [ 00:15:51.914 "23405261-1f5d-424b-8ba1-ab4d55934348" 00:15:51.914 ], 00:15:51.914 "product_name": "Malloc disk", 00:15:51.914 "block_size": 512, 00:15:51.914 "num_blocks": 65536, 00:15:51.914 "uuid": "23405261-1f5d-424b-8ba1-ab4d55934348", 00:15:51.914 "assigned_rate_limits": { 00:15:51.914 "rw_ios_per_sec": 0, 00:15:51.914 "rw_mbytes_per_sec": 0, 00:15:51.914 "r_mbytes_per_sec": 0, 00:15:51.914 "w_mbytes_per_sec": 0 00:15:51.914 }, 00:15:51.914 "claimed": true, 00:15:51.914 "claim_type": "exclusive_write", 00:15:51.914 "zoned": false, 00:15:51.914 "supported_io_types": { 00:15:51.914 "read": true, 00:15:51.914 "write": true, 00:15:51.914 "unmap": true, 00:15:51.914 "flush": true, 00:15:51.914 "reset": true, 00:15:51.914 "nvme_admin": false, 00:15:51.914 "nvme_io": false, 00:15:51.914 "nvme_io_md": false, 00:15:51.914 "write_zeroes": true, 00:15:51.914 "zcopy": true, 00:15:51.914 "get_zone_info": false, 00:15:51.914 "zone_management": false, 00:15:51.914 "zone_append": false, 00:15:51.914 "compare": false, 
00:15:51.914 "compare_and_write": false, 00:15:51.914 "abort": true, 00:15:51.914 "seek_hole": false, 00:15:51.914 "seek_data": false, 00:15:51.914 "copy": true, 00:15:51.914 "nvme_iov_md": false 00:15:51.914 }, 00:15:51.914 "memory_domains": [ 00:15:51.914 { 00:15:51.914 "dma_device_id": "system", 00:15:51.914 "dma_device_type": 1 00:15:51.914 }, 00:15:51.914 { 00:15:51.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.914 "dma_device_type": 2 00:15:51.914 } 00:15:51.914 ], 00:15:51.914 "driver_specific": {} 00:15:51.914 } 00:15:51.914 ] 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.914 "name": "Existed_Raid", 00:15:51.914 "uuid": "9b5a11c2-04be-4224-b58c-d54de2f58f06", 00:15:51.914 "strip_size_kb": 64, 00:15:51.914 "state": "online", 00:15:51.914 "raid_level": "concat", 00:15:51.914 "superblock": false, 00:15:51.914 "num_base_bdevs": 3, 00:15:51.914 "num_base_bdevs_discovered": 3, 00:15:51.914 "num_base_bdevs_operational": 3, 00:15:51.914 "base_bdevs_list": [ 00:15:51.914 { 00:15:51.914 "name": "BaseBdev1", 00:15:51.914 "uuid": "8061ab19-44c6-47e0-8210-995e47942605", 00:15:51.914 "is_configured": true, 00:15:51.914 "data_offset": 0, 00:15:51.914 "data_size": 65536 00:15:51.914 }, 00:15:51.914 { 00:15:51.914 "name": "BaseBdev2", 00:15:51.914 "uuid": "bd1a9f94-f63e-4526-ae1e-18cef7f5d6c3", 00:15:51.914 "is_configured": true, 00:15:51.914 "data_offset": 0, 00:15:51.914 "data_size": 65536 00:15:51.914 }, 00:15:51.914 { 00:15:51.914 "name": "BaseBdev3", 00:15:51.914 "uuid": "23405261-1f5d-424b-8ba1-ab4d55934348", 00:15:51.914 "is_configured": true, 00:15:51.914 "data_offset": 0, 00:15:51.914 "data_size": 65536 00:15:51.914 } 00:15:51.914 ] 00:15:51.914 }' 00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:15:51.914 14:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.479 [2024-11-04 14:48:22.161723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.479 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.479 "name": "Existed_Raid", 00:15:52.479 "aliases": [ 00:15:52.479 "9b5a11c2-04be-4224-b58c-d54de2f58f06" 00:15:52.479 ], 00:15:52.479 "product_name": "Raid Volume", 00:15:52.479 "block_size": 512, 00:15:52.479 "num_blocks": 196608, 00:15:52.479 "uuid": "9b5a11c2-04be-4224-b58c-d54de2f58f06", 00:15:52.479 "assigned_rate_limits": { 00:15:52.479 "rw_ios_per_sec": 0, 00:15:52.479 "rw_mbytes_per_sec": 0, 00:15:52.479 "r_mbytes_per_sec": 
0, 00:15:52.479 "w_mbytes_per_sec": 0 00:15:52.479 }, 00:15:52.479 "claimed": false, 00:15:52.479 "zoned": false, 00:15:52.479 "supported_io_types": { 00:15:52.479 "read": true, 00:15:52.479 "write": true, 00:15:52.479 "unmap": true, 00:15:52.479 "flush": true, 00:15:52.479 "reset": true, 00:15:52.479 "nvme_admin": false, 00:15:52.479 "nvme_io": false, 00:15:52.479 "nvme_io_md": false, 00:15:52.479 "write_zeroes": true, 00:15:52.479 "zcopy": false, 00:15:52.479 "get_zone_info": false, 00:15:52.479 "zone_management": false, 00:15:52.479 "zone_append": false, 00:15:52.479 "compare": false, 00:15:52.479 "compare_and_write": false, 00:15:52.479 "abort": false, 00:15:52.479 "seek_hole": false, 00:15:52.479 "seek_data": false, 00:15:52.479 "copy": false, 00:15:52.479 "nvme_iov_md": false 00:15:52.479 }, 00:15:52.479 "memory_domains": [ 00:15:52.479 { 00:15:52.479 "dma_device_id": "system", 00:15:52.479 "dma_device_type": 1 00:15:52.479 }, 00:15:52.479 { 00:15:52.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.479 "dma_device_type": 2 00:15:52.479 }, 00:15:52.479 { 00:15:52.479 "dma_device_id": "system", 00:15:52.479 "dma_device_type": 1 00:15:52.479 }, 00:15:52.479 { 00:15:52.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.480 "dma_device_type": 2 00:15:52.480 }, 00:15:52.480 { 00:15:52.480 "dma_device_id": "system", 00:15:52.480 "dma_device_type": 1 00:15:52.480 }, 00:15:52.480 { 00:15:52.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.480 "dma_device_type": 2 00:15:52.480 } 00:15:52.480 ], 00:15:52.480 "driver_specific": { 00:15:52.480 "raid": { 00:15:52.480 "uuid": "9b5a11c2-04be-4224-b58c-d54de2f58f06", 00:15:52.480 "strip_size_kb": 64, 00:15:52.480 "state": "online", 00:15:52.480 "raid_level": "concat", 00:15:52.480 "superblock": false, 00:15:52.480 "num_base_bdevs": 3, 00:15:52.480 "num_base_bdevs_discovered": 3, 00:15:52.480 "num_base_bdevs_operational": 3, 00:15:52.480 "base_bdevs_list": [ 00:15:52.480 { 00:15:52.480 "name": "BaseBdev1", 
00:15:52.480 "uuid": "8061ab19-44c6-47e0-8210-995e47942605", 00:15:52.480 "is_configured": true, 00:15:52.480 "data_offset": 0, 00:15:52.480 "data_size": 65536 00:15:52.480 }, 00:15:52.480 { 00:15:52.480 "name": "BaseBdev2", 00:15:52.480 "uuid": "bd1a9f94-f63e-4526-ae1e-18cef7f5d6c3", 00:15:52.480 "is_configured": true, 00:15:52.480 "data_offset": 0, 00:15:52.480 "data_size": 65536 00:15:52.480 }, 00:15:52.480 { 00:15:52.480 "name": "BaseBdev3", 00:15:52.480 "uuid": "23405261-1f5d-424b-8ba1-ab4d55934348", 00:15:52.480 "is_configured": true, 00:15:52.480 "data_offset": 0, 00:15:52.480 "data_size": 65536 00:15:52.480 } 00:15:52.480 ] 00:15:52.480 } 00:15:52.480 } 00:15:52.480 }' 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:52.480 BaseBdev2 00:15:52.480 BaseBdev3' 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.480 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 [2024-11-04 14:48:22.461414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.746 [2024-11-04 14:48:22.461465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.746 [2024-11-04 14:48:22.461545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.746 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.746 "name": "Existed_Raid", 00:15:52.746 "uuid": "9b5a11c2-04be-4224-b58c-d54de2f58f06", 00:15:52.746 "strip_size_kb": 64, 00:15:52.746 "state": "offline", 00:15:52.746 "raid_level": "concat", 00:15:52.746 "superblock": false, 00:15:52.746 "num_base_bdevs": 3, 00:15:52.746 "num_base_bdevs_discovered": 2, 00:15:52.746 "num_base_bdevs_operational": 2, 00:15:52.746 "base_bdevs_list": [ 00:15:52.746 { 00:15:52.746 "name": null, 00:15:52.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.746 "is_configured": false, 00:15:52.746 "data_offset": 0, 00:15:52.746 "data_size": 65536 00:15:52.746 }, 00:15:52.746 { 00:15:52.746 "name": "BaseBdev2", 00:15:52.746 "uuid": 
"bd1a9f94-f63e-4526-ae1e-18cef7f5d6c3", 00:15:52.746 "is_configured": true, 00:15:52.746 "data_offset": 0, 00:15:52.746 "data_size": 65536 00:15:52.746 }, 00:15:52.746 { 00:15:52.746 "name": "BaseBdev3", 00:15:52.746 "uuid": "23405261-1f5d-424b-8ba1-ab4d55934348", 00:15:52.746 "is_configured": true, 00:15:52.747 "data_offset": 0, 00:15:52.747 "data_size": 65536 00:15:52.747 } 00:15:52.747 ] 00:15:52.747 }' 00:15:52.747 14:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.747 14:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.329 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.329 [2024-11-04 14:48:23.148511] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.588 [2024-11-04 14:48:23.299694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:53.588 [2024-11-04 14:48:23.299786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.588 14:48:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.588 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.846 BaseBdev2 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:53.846 
14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.846 [ 00:15:53.846 { 00:15:53.846 "name": "BaseBdev2", 00:15:53.846 "aliases": [ 00:15:53.846 "79abab77-cdfd-4529-8cf6-41f8f274fa44" 00:15:53.846 ], 00:15:53.846 "product_name": "Malloc disk", 00:15:53.846 "block_size": 512, 00:15:53.846 "num_blocks": 65536, 00:15:53.846 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:53.846 "assigned_rate_limits": { 00:15:53.846 "rw_ios_per_sec": 0, 00:15:53.846 "rw_mbytes_per_sec": 0, 00:15:53.846 "r_mbytes_per_sec": 0, 00:15:53.846 "w_mbytes_per_sec": 0 00:15:53.846 }, 00:15:53.846 "claimed": false, 00:15:53.846 "zoned": false, 00:15:53.846 "supported_io_types": { 00:15:53.846 "read": true, 00:15:53.846 "write": true, 00:15:53.846 "unmap": true, 00:15:53.846 "flush": true, 00:15:53.846 "reset": true, 00:15:53.846 "nvme_admin": false, 00:15:53.846 "nvme_io": false, 00:15:53.846 "nvme_io_md": false, 00:15:53.846 "write_zeroes": true, 
00:15:53.846 "zcopy": true, 00:15:53.846 "get_zone_info": false, 00:15:53.846 "zone_management": false, 00:15:53.846 "zone_append": false, 00:15:53.846 "compare": false, 00:15:53.846 "compare_and_write": false, 00:15:53.846 "abort": true, 00:15:53.846 "seek_hole": false, 00:15:53.846 "seek_data": false, 00:15:53.846 "copy": true, 00:15:53.846 "nvme_iov_md": false 00:15:53.846 }, 00:15:53.846 "memory_domains": [ 00:15:53.846 { 00:15:53.846 "dma_device_id": "system", 00:15:53.846 "dma_device_type": 1 00:15:53.846 }, 00:15:53.846 { 00:15:53.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.846 "dma_device_type": 2 00:15:53.846 } 00:15:53.846 ], 00:15:53.846 "driver_specific": {} 00:15:53.846 } 00:15:53.846 ] 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.846 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.846 BaseBdev3 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:53.847 14:48:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.847 [ 00:15:53.847 { 00:15:53.847 "name": "BaseBdev3", 00:15:53.847 "aliases": [ 00:15:53.847 "f76aece2-0f0a-443f-a0d8-63a05c718740" 00:15:53.847 ], 00:15:53.847 "product_name": "Malloc disk", 00:15:53.847 "block_size": 512, 00:15:53.847 "num_blocks": 65536, 00:15:53.847 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:53.847 "assigned_rate_limits": { 00:15:53.847 "rw_ios_per_sec": 0, 00:15:53.847 "rw_mbytes_per_sec": 0, 00:15:53.847 "r_mbytes_per_sec": 0, 00:15:53.847 "w_mbytes_per_sec": 0 00:15:53.847 }, 00:15:53.847 "claimed": false, 00:15:53.847 "zoned": false, 00:15:53.847 "supported_io_types": { 00:15:53.847 "read": true, 00:15:53.847 "write": true, 00:15:53.847 "unmap": true, 00:15:53.847 "flush": true, 00:15:53.847 "reset": true, 00:15:53.847 "nvme_admin": false, 00:15:53.847 "nvme_io": false, 00:15:53.847 "nvme_io_md": false, 00:15:53.847 "write_zeroes": true, 
00:15:53.847 "zcopy": true, 00:15:53.847 "get_zone_info": false, 00:15:53.847 "zone_management": false, 00:15:53.847 "zone_append": false, 00:15:53.847 "compare": false, 00:15:53.847 "compare_and_write": false, 00:15:53.847 "abort": true, 00:15:53.847 "seek_hole": false, 00:15:53.847 "seek_data": false, 00:15:53.847 "copy": true, 00:15:53.847 "nvme_iov_md": false 00:15:53.847 }, 00:15:53.847 "memory_domains": [ 00:15:53.847 { 00:15:53.847 "dma_device_id": "system", 00:15:53.847 "dma_device_type": 1 00:15:53.847 }, 00:15:53.847 { 00:15:53.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.847 "dma_device_type": 2 00:15:53.847 } 00:15:53.847 ], 00:15:53.847 "driver_specific": {} 00:15:53.847 } 00:15:53.847 ] 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.847 [2024-11-04 14:48:23.623199] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.847 [2024-11-04 14:48:23.623283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.847 [2024-11-04 14:48:23.623328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.847 [2024-11-04 14:48:23.626039] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.847 "name": "Existed_Raid", 00:15:53.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.847 "strip_size_kb": 64, 00:15:53.847 "state": "configuring", 00:15:53.847 "raid_level": "concat", 00:15:53.847 "superblock": false, 00:15:53.847 "num_base_bdevs": 3, 00:15:53.847 "num_base_bdevs_discovered": 2, 00:15:53.847 "num_base_bdevs_operational": 3, 00:15:53.847 "base_bdevs_list": [ 00:15:53.847 { 00:15:53.847 "name": "BaseBdev1", 00:15:53.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.847 "is_configured": false, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 0 00:15:53.847 }, 00:15:53.847 { 00:15:53.847 "name": "BaseBdev2", 00:15:53.847 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:53.847 "is_configured": true, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 65536 00:15:53.847 }, 00:15:53.847 { 00:15:53.847 "name": "BaseBdev3", 00:15:53.847 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:53.847 "is_configured": true, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 65536 00:15:53.847 } 00:15:53.847 ] 00:15:53.847 }' 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.847 14:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.413 [2024-11-04 14:48:24.135347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.413 "name": "Existed_Raid", 00:15:54.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.413 "strip_size_kb": 64, 00:15:54.413 "state": "configuring", 00:15:54.413 "raid_level": "concat", 00:15:54.413 "superblock": false, 
00:15:54.413 "num_base_bdevs": 3, 00:15:54.413 "num_base_bdevs_discovered": 1, 00:15:54.413 "num_base_bdevs_operational": 3, 00:15:54.413 "base_bdevs_list": [ 00:15:54.413 { 00:15:54.413 "name": "BaseBdev1", 00:15:54.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.413 "is_configured": false, 00:15:54.413 "data_offset": 0, 00:15:54.413 "data_size": 0 00:15:54.413 }, 00:15:54.413 { 00:15:54.413 "name": null, 00:15:54.413 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:54.413 "is_configured": false, 00:15:54.413 "data_offset": 0, 00:15:54.413 "data_size": 65536 00:15:54.413 }, 00:15:54.413 { 00:15:54.413 "name": "BaseBdev3", 00:15:54.413 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:54.413 "is_configured": true, 00:15:54.413 "data_offset": 0, 00:15:54.413 "data_size": 65536 00:15:54.413 } 00:15:54.413 ] 00:15:54.413 }' 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.413 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.978 
14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.978 [2024-11-04 14:48:24.766708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.978 BaseBdev1 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.978 [ 00:15:54.978 { 00:15:54.978 "name": "BaseBdev1", 00:15:54.978 "aliases": [ 00:15:54.978 "f1361707-eedb-4c2d-80cb-4bfcd986393b" 00:15:54.978 ], 00:15:54.978 "product_name": 
"Malloc disk", 00:15:54.978 "block_size": 512, 00:15:54.978 "num_blocks": 65536, 00:15:54.978 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:54.978 "assigned_rate_limits": { 00:15:54.978 "rw_ios_per_sec": 0, 00:15:54.978 "rw_mbytes_per_sec": 0, 00:15:54.978 "r_mbytes_per_sec": 0, 00:15:54.978 "w_mbytes_per_sec": 0 00:15:54.978 }, 00:15:54.978 "claimed": true, 00:15:54.978 "claim_type": "exclusive_write", 00:15:54.978 "zoned": false, 00:15:54.978 "supported_io_types": { 00:15:54.978 "read": true, 00:15:54.978 "write": true, 00:15:54.978 "unmap": true, 00:15:54.978 "flush": true, 00:15:54.978 "reset": true, 00:15:54.978 "nvme_admin": false, 00:15:54.978 "nvme_io": false, 00:15:54.978 "nvme_io_md": false, 00:15:54.978 "write_zeroes": true, 00:15:54.978 "zcopy": true, 00:15:54.978 "get_zone_info": false, 00:15:54.978 "zone_management": false, 00:15:54.978 "zone_append": false, 00:15:54.978 "compare": false, 00:15:54.978 "compare_and_write": false, 00:15:54.978 "abort": true, 00:15:54.978 "seek_hole": false, 00:15:54.978 "seek_data": false, 00:15:54.978 "copy": true, 00:15:54.978 "nvme_iov_md": false 00:15:54.978 }, 00:15:54.978 "memory_domains": [ 00:15:54.978 { 00:15:54.978 "dma_device_id": "system", 00:15:54.978 "dma_device_type": 1 00:15:54.978 }, 00:15:54.978 { 00:15:54.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.978 "dma_device_type": 2 00:15:54.978 } 00:15:54.978 ], 00:15:54.978 "driver_specific": {} 00:15:54.978 } 00:15:54.978 ] 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:54.978 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.978 14:48:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.979 "name": "Existed_Raid", 00:15:54.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.979 "strip_size_kb": 64, 00:15:54.979 "state": "configuring", 00:15:54.979 "raid_level": "concat", 00:15:54.979 "superblock": false, 00:15:54.979 "num_base_bdevs": 3, 00:15:54.979 "num_base_bdevs_discovered": 2, 00:15:54.979 "num_base_bdevs_operational": 3, 00:15:54.979 "base_bdevs_list": [ 00:15:54.979 { 00:15:54.979 "name": "BaseBdev1", 
00:15:54.979 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:54.979 "is_configured": true, 00:15:54.979 "data_offset": 0, 00:15:54.979 "data_size": 65536 00:15:54.979 }, 00:15:54.979 { 00:15:54.979 "name": null, 00:15:54.979 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:54.979 "is_configured": false, 00:15:54.979 "data_offset": 0, 00:15:54.979 "data_size": 65536 00:15:54.979 }, 00:15:54.979 { 00:15:54.979 "name": "BaseBdev3", 00:15:54.979 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:54.979 "is_configured": true, 00:15:54.979 "data_offset": 0, 00:15:54.979 "data_size": 65536 00:15:54.979 } 00:15:54.979 ] 00:15:54.979 }' 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.979 14:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.543 [2024-11-04 14:48:25.366961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.543 
14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.543 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.544 "name": "Existed_Raid", 00:15:55.544 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:55.544 "strip_size_kb": 64, 00:15:55.544 "state": "configuring", 00:15:55.544 "raid_level": "concat", 00:15:55.544 "superblock": false, 00:15:55.544 "num_base_bdevs": 3, 00:15:55.544 "num_base_bdevs_discovered": 1, 00:15:55.544 "num_base_bdevs_operational": 3, 00:15:55.544 "base_bdevs_list": [ 00:15:55.544 { 00:15:55.544 "name": "BaseBdev1", 00:15:55.544 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:55.544 "is_configured": true, 00:15:55.544 "data_offset": 0, 00:15:55.544 "data_size": 65536 00:15:55.544 }, 00:15:55.544 { 00:15:55.544 "name": null, 00:15:55.544 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:55.544 "is_configured": false, 00:15:55.544 "data_offset": 0, 00:15:55.544 "data_size": 65536 00:15:55.544 }, 00:15:55.544 { 00:15:55.544 "name": null, 00:15:55.544 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:55.544 "is_configured": false, 00:15:55.544 "data_offset": 0, 00:15:55.544 "data_size": 65536 00:15:55.544 } 00:15:55.544 ] 00:15:55.544 }' 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.544 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.111 [2024-11-04 14:48:25.899129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.111 "name": "Existed_Raid", 00:15:56.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.111 "strip_size_kb": 64, 00:15:56.111 "state": "configuring", 00:15:56.111 "raid_level": "concat", 00:15:56.111 "superblock": false, 00:15:56.111 "num_base_bdevs": 3, 00:15:56.111 "num_base_bdevs_discovered": 2, 00:15:56.111 "num_base_bdevs_operational": 3, 00:15:56.111 "base_bdevs_list": [ 00:15:56.111 { 00:15:56.111 "name": "BaseBdev1", 00:15:56.111 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:56.111 "is_configured": true, 00:15:56.111 "data_offset": 0, 00:15:56.111 "data_size": 65536 00:15:56.111 }, 00:15:56.111 { 00:15:56.111 "name": null, 00:15:56.111 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:56.111 "is_configured": false, 00:15:56.111 "data_offset": 0, 00:15:56.111 "data_size": 65536 00:15:56.111 }, 00:15:56.111 { 00:15:56.111 "name": "BaseBdev3", 00:15:56.111 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:56.111 "is_configured": true, 00:15:56.111 "data_offset": 0, 00:15:56.111 "data_size": 65536 00:15:56.111 } 00:15:56.111 ] 00:15:56.111 }' 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.111 14:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.677 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.677 [2024-11-04 14:48:26.443310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.678 14:48:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.678 14:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.936 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.936 "name": "Existed_Raid", 00:15:56.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.936 "strip_size_kb": 64, 00:15:56.936 "state": "configuring", 00:15:56.936 "raid_level": "concat", 00:15:56.936 "superblock": false, 00:15:56.936 "num_base_bdevs": 3, 00:15:56.936 "num_base_bdevs_discovered": 1, 00:15:56.936 "num_base_bdevs_operational": 3, 00:15:56.936 "base_bdevs_list": [ 00:15:56.936 { 00:15:56.936 "name": null, 00:15:56.936 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:56.936 "is_configured": false, 00:15:56.936 "data_offset": 0, 00:15:56.936 "data_size": 65536 00:15:56.936 }, 00:15:56.936 { 00:15:56.936 "name": null, 00:15:56.936 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:56.936 "is_configured": false, 00:15:56.936 "data_offset": 0, 00:15:56.936 "data_size": 65536 00:15:56.936 }, 00:15:56.936 { 00:15:56.936 "name": "BaseBdev3", 00:15:56.936 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:56.936 "is_configured": true, 00:15:56.936 "data_offset": 0, 00:15:56.936 "data_size": 65536 00:15:56.936 } 00:15:56.936 ] 00:15:56.936 }' 00:15:56.936 14:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.936 14:48:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.194 [2024-11-04 14:48:27.066677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.194 14:48:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.194 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.452 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.452 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.452 "name": "Existed_Raid", 00:15:57.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.452 "strip_size_kb": 64, 00:15:57.452 "state": "configuring", 00:15:57.452 "raid_level": "concat", 00:15:57.452 "superblock": false, 00:15:57.452 "num_base_bdevs": 3, 00:15:57.452 "num_base_bdevs_discovered": 2, 00:15:57.452 "num_base_bdevs_operational": 3, 00:15:57.452 "base_bdevs_list": [ 00:15:57.452 { 00:15:57.452 "name": null, 00:15:57.452 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:57.452 "is_configured": false, 00:15:57.452 "data_offset": 0, 00:15:57.452 "data_size": 65536 00:15:57.452 }, 00:15:57.452 { 00:15:57.453 "name": "BaseBdev2", 00:15:57.453 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:57.453 "is_configured": true, 00:15:57.453 "data_offset": 
0, 00:15:57.453 "data_size": 65536 00:15:57.453 }, 00:15:57.453 { 00:15:57.453 "name": "BaseBdev3", 00:15:57.453 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:57.453 "is_configured": true, 00:15:57.453 "data_offset": 0, 00:15:57.453 "data_size": 65536 00:15:57.453 } 00:15:57.453 ] 00:15:57.453 }' 00:15:57.453 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.453 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.711 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f1361707-eedb-4c2d-80cb-4bfcd986393b 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.969 [2024-11-04 14:48:27.672744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:57.969 [2024-11-04 14:48:27.672836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:57.969 [2024-11-04 14:48:27.672854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:57.969 [2024-11-04 14:48:27.673215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:57.969 [2024-11-04 14:48:27.673483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:57.969 [2024-11-04 14:48:27.673500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:57.969 [2024-11-04 14:48:27.673859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.969 NewBaseBdev 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:57.969 
14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.969 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.970 [ 00:15:57.970 { 00:15:57.970 "name": "NewBaseBdev", 00:15:57.970 "aliases": [ 00:15:57.970 "f1361707-eedb-4c2d-80cb-4bfcd986393b" 00:15:57.970 ], 00:15:57.970 "product_name": "Malloc disk", 00:15:57.970 "block_size": 512, 00:15:57.970 "num_blocks": 65536, 00:15:57.970 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:57.970 "assigned_rate_limits": { 00:15:57.970 "rw_ios_per_sec": 0, 00:15:57.970 "rw_mbytes_per_sec": 0, 00:15:57.970 "r_mbytes_per_sec": 0, 00:15:57.970 "w_mbytes_per_sec": 0 00:15:57.970 }, 00:15:57.970 "claimed": true, 00:15:57.970 "claim_type": "exclusive_write", 00:15:57.970 "zoned": false, 00:15:57.970 "supported_io_types": { 00:15:57.970 "read": true, 00:15:57.970 "write": true, 00:15:57.970 "unmap": true, 00:15:57.970 "flush": true, 00:15:57.970 "reset": true, 00:15:57.970 "nvme_admin": false, 00:15:57.970 "nvme_io": false, 00:15:57.970 "nvme_io_md": false, 00:15:57.970 "write_zeroes": true, 00:15:57.970 "zcopy": true, 00:15:57.970 "get_zone_info": false, 00:15:57.970 "zone_management": false, 00:15:57.970 "zone_append": false, 00:15:57.970 "compare": false, 00:15:57.970 "compare_and_write": false, 00:15:57.970 "abort": true, 00:15:57.970 "seek_hole": false, 00:15:57.970 "seek_data": false, 00:15:57.970 "copy": true, 00:15:57.970 "nvme_iov_md": false 00:15:57.970 }, 00:15:57.970 
"memory_domains": [ 00:15:57.970 { 00:15:57.970 "dma_device_id": "system", 00:15:57.970 "dma_device_type": 1 00:15:57.970 }, 00:15:57.970 { 00:15:57.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.970 "dma_device_type": 2 00:15:57.970 } 00:15:57.970 ], 00:15:57.970 "driver_specific": {} 00:15:57.970 } 00:15:57.970 ] 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.970 "name": "Existed_Raid", 00:15:57.970 "uuid": "079abd24-d593-4104-82f6-7045d26780ef", 00:15:57.970 "strip_size_kb": 64, 00:15:57.970 "state": "online", 00:15:57.970 "raid_level": "concat", 00:15:57.970 "superblock": false, 00:15:57.970 "num_base_bdevs": 3, 00:15:57.970 "num_base_bdevs_discovered": 3, 00:15:57.970 "num_base_bdevs_operational": 3, 00:15:57.970 "base_bdevs_list": [ 00:15:57.970 { 00:15:57.970 "name": "NewBaseBdev", 00:15:57.970 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:57.970 "is_configured": true, 00:15:57.970 "data_offset": 0, 00:15:57.970 "data_size": 65536 00:15:57.970 }, 00:15:57.970 { 00:15:57.970 "name": "BaseBdev2", 00:15:57.970 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:57.970 "is_configured": true, 00:15:57.970 "data_offset": 0, 00:15:57.970 "data_size": 65536 00:15:57.970 }, 00:15:57.970 { 00:15:57.970 "name": "BaseBdev3", 00:15:57.970 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:57.970 "is_configured": true, 00:15:57.970 "data_offset": 0, 00:15:57.970 "data_size": 65536 00:15:57.970 } 00:15:57.970 ] 00:15:57.970 }' 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.970 14:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.538 [2024-11-04 14:48:28.193360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.538 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.538 "name": "Existed_Raid", 00:15:58.538 "aliases": [ 00:15:58.538 "079abd24-d593-4104-82f6-7045d26780ef" 00:15:58.538 ], 00:15:58.538 "product_name": "Raid Volume", 00:15:58.538 "block_size": 512, 00:15:58.538 "num_blocks": 196608, 00:15:58.538 "uuid": "079abd24-d593-4104-82f6-7045d26780ef", 00:15:58.538 "assigned_rate_limits": { 00:15:58.538 "rw_ios_per_sec": 0, 00:15:58.538 "rw_mbytes_per_sec": 0, 00:15:58.538 "r_mbytes_per_sec": 0, 00:15:58.538 "w_mbytes_per_sec": 0 00:15:58.538 }, 00:15:58.538 "claimed": false, 00:15:58.538 "zoned": false, 00:15:58.538 "supported_io_types": { 00:15:58.538 "read": true, 00:15:58.538 "write": true, 00:15:58.538 "unmap": true, 00:15:58.538 "flush": true, 00:15:58.539 "reset": true, 00:15:58.539 "nvme_admin": false, 00:15:58.539 "nvme_io": false, 00:15:58.539 "nvme_io_md": false, 00:15:58.539 "write_zeroes": true, 
00:15:58.539 "zcopy": false, 00:15:58.539 "get_zone_info": false, 00:15:58.539 "zone_management": false, 00:15:58.539 "zone_append": false, 00:15:58.539 "compare": false, 00:15:58.539 "compare_and_write": false, 00:15:58.539 "abort": false, 00:15:58.539 "seek_hole": false, 00:15:58.539 "seek_data": false, 00:15:58.539 "copy": false, 00:15:58.539 "nvme_iov_md": false 00:15:58.539 }, 00:15:58.539 "memory_domains": [ 00:15:58.539 { 00:15:58.539 "dma_device_id": "system", 00:15:58.539 "dma_device_type": 1 00:15:58.539 }, 00:15:58.539 { 00:15:58.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.539 "dma_device_type": 2 00:15:58.539 }, 00:15:58.539 { 00:15:58.539 "dma_device_id": "system", 00:15:58.539 "dma_device_type": 1 00:15:58.539 }, 00:15:58.539 { 00:15:58.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.539 "dma_device_type": 2 00:15:58.539 }, 00:15:58.539 { 00:15:58.539 "dma_device_id": "system", 00:15:58.539 "dma_device_type": 1 00:15:58.539 }, 00:15:58.539 { 00:15:58.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.539 "dma_device_type": 2 00:15:58.539 } 00:15:58.539 ], 00:15:58.539 "driver_specific": { 00:15:58.539 "raid": { 00:15:58.539 "uuid": "079abd24-d593-4104-82f6-7045d26780ef", 00:15:58.539 "strip_size_kb": 64, 00:15:58.539 "state": "online", 00:15:58.539 "raid_level": "concat", 00:15:58.539 "superblock": false, 00:15:58.539 "num_base_bdevs": 3, 00:15:58.539 "num_base_bdevs_discovered": 3, 00:15:58.539 "num_base_bdevs_operational": 3, 00:15:58.539 "base_bdevs_list": [ 00:15:58.539 { 00:15:58.539 "name": "NewBaseBdev", 00:15:58.539 "uuid": "f1361707-eedb-4c2d-80cb-4bfcd986393b", 00:15:58.539 "is_configured": true, 00:15:58.539 "data_offset": 0, 00:15:58.539 "data_size": 65536 00:15:58.539 }, 00:15:58.539 { 00:15:58.539 "name": "BaseBdev2", 00:15:58.539 "uuid": "79abab77-cdfd-4529-8cf6-41f8f274fa44", 00:15:58.539 "is_configured": true, 00:15:58.539 "data_offset": 0, 00:15:58.539 "data_size": 65536 00:15:58.539 }, 00:15:58.539 { 
00:15:58.539 "name": "BaseBdev3", 00:15:58.539 "uuid": "f76aece2-0f0a-443f-a0d8-63a05c718740", 00:15:58.539 "is_configured": true, 00:15:58.539 "data_offset": 0, 00:15:58.539 "data_size": 65536 00:15:58.539 } 00:15:58.539 ] 00:15:58.539 } 00:15:58.539 } 00:15:58.539 }' 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:58.539 BaseBdev2 00:15:58.539 BaseBdev3' 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.539 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:58.798 [2024-11-04 14:48:28.501045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.798 [2024-11-04 14:48:28.501221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.798 [2024-11-04 14:48:28.501391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.798 [2024-11-04 14:48:28.501498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.798 [2024-11-04 14:48:28.501522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65735 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65735 ']' 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65735 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65735 00:15:58.798 killing process with pid 65735 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65735' 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65735 00:15:58.798 [2024-11-04 14:48:28.542050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.798 14:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65735 00:15:59.057 [2024-11-04 14:48:28.840751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.430 14:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:00.430 00:16:00.430 real 0m11.803s 00:16:00.430 user 0m19.258s 00:16:00.430 sys 0m1.803s 00:16:00.430 14:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:00.430 14:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.430 ************************************ 00:16:00.430 END TEST raid_state_function_test 00:16:00.430 ************************************ 00:16:00.430 14:48:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:00.430 14:48:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:00.430 14:48:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:00.430 14:48:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.430 ************************************ 00:16:00.430 START TEST raid_state_function_test_sb 00:16:00.430 ************************************ 00:16:00.430 14:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:16:00.430 14:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:00.430 14:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:00.430 14:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:00.430 14:48:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:00.430 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:00.430 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.431 Process raid pid: 66372 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:00.431 14:48:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66372 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66372' 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66372 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66372 ']' 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:00.431 14:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.431 [2024-11-04 14:48:30.153734] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:16:00.431 [2024-11-04 14:48:30.154194] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.689 [2024-11-04 14:48:30.336564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.689 [2024-11-04 14:48:30.484054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.946 [2024-11-04 14:48:30.714335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.946 [2024-11-04 14:48:30.714647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.205 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:01.205 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:01.205 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:01.205 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.205 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.205 [2024-11-04 14:48:31.090674] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.205 [2024-11-04 14:48:31.090754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.205 [2024-11-04 
14:48:31.090782] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.205 [2024-11-04 14:48:31.090811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.205 [2024-11-04 14:48:31.090829] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.205 [2024-11-04 14:48:31.090856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.463 "name": "Existed_Raid", 00:16:01.463 "uuid": "3ae17a3c-3783-4907-b493-5ffcaa2e491e", 00:16:01.463 "strip_size_kb": 64, 00:16:01.463 "state": "configuring", 00:16:01.463 "raid_level": "concat", 00:16:01.463 "superblock": true, 00:16:01.463 "num_base_bdevs": 3, 00:16:01.463 "num_base_bdevs_discovered": 0, 00:16:01.463 "num_base_bdevs_operational": 3, 00:16:01.463 "base_bdevs_list": [ 00:16:01.463 { 00:16:01.463 "name": "BaseBdev1", 00:16:01.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.463 "is_configured": false, 00:16:01.463 "data_offset": 0, 00:16:01.463 "data_size": 0 00:16:01.463 }, 00:16:01.463 { 00:16:01.463 "name": "BaseBdev2", 00:16:01.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.463 "is_configured": false, 00:16:01.463 "data_offset": 0, 00:16:01.463 "data_size": 0 00:16:01.463 }, 00:16:01.463 { 00:16:01.463 "name": "BaseBdev3", 00:16:01.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.463 "is_configured": false, 00:16:01.463 "data_offset": 0, 00:16:01.463 "data_size": 0 00:16:01.463 } 00:16:01.463 ] 00:16:01.463 }' 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.463 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.722 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:01.722 14:48:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.722 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.722 [2024-11-04 14:48:31.606743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.722 [2024-11-04 14:48:31.606796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:01.722 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.722 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:01.722 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.722 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.981 [2024-11-04 14:48:31.614772] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.981 [2024-11-04 14:48:31.614842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.981 [2024-11-04 14:48:31.614858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.981 [2024-11-04 14:48:31.614874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.981 [2024-11-04 14:48:31.614884] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.981 [2024-11-04 14:48:31.614899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.981 
14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.981 [2024-11-04 14:48:31.664595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.981 BaseBdev1 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.981 [ 00:16:01.981 { 
00:16:01.981 "name": "BaseBdev1", 00:16:01.981 "aliases": [ 00:16:01.981 "6fc7265c-fa69-4b4b-b3ca-4fc5caec3e59" 00:16:01.981 ], 00:16:01.981 "product_name": "Malloc disk", 00:16:01.981 "block_size": 512, 00:16:01.981 "num_blocks": 65536, 00:16:01.981 "uuid": "6fc7265c-fa69-4b4b-b3ca-4fc5caec3e59", 00:16:01.981 "assigned_rate_limits": { 00:16:01.981 "rw_ios_per_sec": 0, 00:16:01.981 "rw_mbytes_per_sec": 0, 00:16:01.981 "r_mbytes_per_sec": 0, 00:16:01.981 "w_mbytes_per_sec": 0 00:16:01.981 }, 00:16:01.981 "claimed": true, 00:16:01.981 "claim_type": "exclusive_write", 00:16:01.981 "zoned": false, 00:16:01.981 "supported_io_types": { 00:16:01.981 "read": true, 00:16:01.981 "write": true, 00:16:01.981 "unmap": true, 00:16:01.981 "flush": true, 00:16:01.981 "reset": true, 00:16:01.981 "nvme_admin": false, 00:16:01.981 "nvme_io": false, 00:16:01.981 "nvme_io_md": false, 00:16:01.981 "write_zeroes": true, 00:16:01.981 "zcopy": true, 00:16:01.981 "get_zone_info": false, 00:16:01.981 "zone_management": false, 00:16:01.981 "zone_append": false, 00:16:01.981 "compare": false, 00:16:01.981 "compare_and_write": false, 00:16:01.981 "abort": true, 00:16:01.981 "seek_hole": false, 00:16:01.981 "seek_data": false, 00:16:01.981 "copy": true, 00:16:01.981 "nvme_iov_md": false 00:16:01.981 }, 00:16:01.981 "memory_domains": [ 00:16:01.981 { 00:16:01.981 "dma_device_id": "system", 00:16:01.981 "dma_device_type": 1 00:16:01.981 }, 00:16:01.981 { 00:16:01.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.981 "dma_device_type": 2 00:16:01.981 } 00:16:01.981 ], 00:16:01.981 "driver_specific": {} 00:16:01.981 } 00:16:01.981 ] 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.981 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.981 "name": "Existed_Raid", 00:16:01.981 "uuid": "190256c1-8cac-4c9d-824c-00486d365df5", 00:16:01.981 "strip_size_kb": 64, 00:16:01.981 "state": "configuring", 00:16:01.981 "raid_level": "concat", 00:16:01.981 "superblock": true, 00:16:01.981 
"num_base_bdevs": 3, 00:16:01.981 "num_base_bdevs_discovered": 1, 00:16:01.981 "num_base_bdevs_operational": 3, 00:16:01.981 "base_bdevs_list": [ 00:16:01.981 { 00:16:01.981 "name": "BaseBdev1", 00:16:01.981 "uuid": "6fc7265c-fa69-4b4b-b3ca-4fc5caec3e59", 00:16:01.981 "is_configured": true, 00:16:01.981 "data_offset": 2048, 00:16:01.981 "data_size": 63488 00:16:01.981 }, 00:16:01.981 { 00:16:01.981 "name": "BaseBdev2", 00:16:01.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.981 "is_configured": false, 00:16:01.981 "data_offset": 0, 00:16:01.981 "data_size": 0 00:16:01.981 }, 00:16:01.981 { 00:16:01.981 "name": "BaseBdev3", 00:16:01.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.981 "is_configured": false, 00:16:01.981 "data_offset": 0, 00:16:01.981 "data_size": 0 00:16:01.981 } 00:16:01.982 ] 00:16:01.982 }' 00:16:01.982 14:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.982 14:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.549 [2024-11-04 14:48:32.224833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.549 [2024-11-04 14:48:32.225045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:02.549 
14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.549 [2024-11-04 14:48:32.236960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.549 [2024-11-04 14:48:32.239880] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.549 [2024-11-04 14:48:32.240066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.549 [2024-11-04 14:48:32.240185] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.549 [2024-11-04 14:48:32.240338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.549 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.550 "name": "Existed_Raid", 00:16:02.550 "uuid": "8510b6f6-e18e-42a6-92dd-8a57e465dc2a", 00:16:02.550 "strip_size_kb": 64, 00:16:02.550 "state": "configuring", 00:16:02.550 "raid_level": "concat", 00:16:02.550 "superblock": true, 00:16:02.550 "num_base_bdevs": 3, 00:16:02.550 "num_base_bdevs_discovered": 1, 00:16:02.550 "num_base_bdevs_operational": 3, 00:16:02.550 "base_bdevs_list": [ 00:16:02.550 { 00:16:02.550 "name": "BaseBdev1", 00:16:02.550 "uuid": "6fc7265c-fa69-4b4b-b3ca-4fc5caec3e59", 00:16:02.550 "is_configured": true, 00:16:02.550 "data_offset": 2048, 00:16:02.550 "data_size": 63488 00:16:02.550 }, 00:16:02.550 { 00:16:02.550 "name": "BaseBdev2", 00:16:02.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.550 "is_configured": false, 00:16:02.550 "data_offset": 0, 00:16:02.550 "data_size": 0 00:16:02.550 }, 00:16:02.550 { 00:16:02.550 "name": "BaseBdev3", 00:16:02.550 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:02.550 "is_configured": false, 00:16:02.550 "data_offset": 0, 00:16:02.550 "data_size": 0 00:16:02.550 } 00:16:02.550 ] 00:16:02.550 }' 00:16:02.550 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.550 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 [2024-11-04 14:48:32.795836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.117 BaseBdev2 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 [ 00:16:03.117 { 00:16:03.117 "name": "BaseBdev2", 00:16:03.117 "aliases": [ 00:16:03.117 "7c52c0bd-0bcb-484e-a53c-2501b34fd3a2" 00:16:03.117 ], 00:16:03.117 "product_name": "Malloc disk", 00:16:03.117 "block_size": 512, 00:16:03.117 "num_blocks": 65536, 00:16:03.117 "uuid": "7c52c0bd-0bcb-484e-a53c-2501b34fd3a2", 00:16:03.117 "assigned_rate_limits": { 00:16:03.117 "rw_ios_per_sec": 0, 00:16:03.117 "rw_mbytes_per_sec": 0, 00:16:03.117 "r_mbytes_per_sec": 0, 00:16:03.117 "w_mbytes_per_sec": 0 00:16:03.117 }, 00:16:03.117 "claimed": true, 00:16:03.117 "claim_type": "exclusive_write", 00:16:03.117 "zoned": false, 00:16:03.117 "supported_io_types": { 00:16:03.117 "read": true, 00:16:03.117 "write": true, 00:16:03.117 "unmap": true, 00:16:03.117 "flush": true, 00:16:03.117 "reset": true, 00:16:03.117 "nvme_admin": false, 00:16:03.117 "nvme_io": false, 00:16:03.117 "nvme_io_md": false, 00:16:03.117 "write_zeroes": true, 00:16:03.117 "zcopy": true, 00:16:03.117 "get_zone_info": false, 00:16:03.117 "zone_management": false, 00:16:03.117 "zone_append": false, 00:16:03.117 "compare": false, 00:16:03.117 "compare_and_write": false, 00:16:03.117 "abort": true, 00:16:03.117 "seek_hole": false, 00:16:03.117 "seek_data": false, 00:16:03.117 "copy": true, 00:16:03.117 "nvme_iov_md": false 00:16:03.117 }, 00:16:03.117 "memory_domains": [ 00:16:03.117 { 00:16:03.117 "dma_device_id": "system", 00:16:03.117 "dma_device_type": 1 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.117 "dma_device_type": 2 00:16:03.117 } 00:16:03.117 ], 00:16:03.117 "driver_specific": {} 00:16:03.117 } 00:16:03.117 ] 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.117 "name": "Existed_Raid", 00:16:03.117 "uuid": "8510b6f6-e18e-42a6-92dd-8a57e465dc2a", 00:16:03.117 "strip_size_kb": 64, 00:16:03.117 "state": "configuring", 00:16:03.117 "raid_level": "concat", 00:16:03.117 "superblock": true, 00:16:03.117 "num_base_bdevs": 3, 00:16:03.117 "num_base_bdevs_discovered": 2, 00:16:03.117 "num_base_bdevs_operational": 3, 00:16:03.117 "base_bdevs_list": [ 00:16:03.117 { 00:16:03.117 "name": "BaseBdev1", 00:16:03.117 "uuid": "6fc7265c-fa69-4b4b-b3ca-4fc5caec3e59", 00:16:03.117 "is_configured": true, 00:16:03.117 "data_offset": 2048, 00:16:03.117 "data_size": 63488 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "name": "BaseBdev2", 00:16:03.117 "uuid": "7c52c0bd-0bcb-484e-a53c-2501b34fd3a2", 00:16:03.117 "is_configured": true, 00:16:03.117 "data_offset": 2048, 00:16:03.117 "data_size": 63488 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "name": "BaseBdev3", 00:16:03.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.117 "is_configured": false, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 0 00:16:03.117 } 00:16:03.117 ] 00:16:03.117 }' 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.117 14:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:03.688 14:48:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.688 [2024-11-04 14:48:33.403200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.688 [2024-11-04 14:48:33.403980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:03.688 [2024-11-04 14:48:33.404021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:03.688 BaseBdev3 00:16:03.688 [2024-11-04 14:48:33.404623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:03.688 [2024-11-04 14:48:33.404840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:03.688 [2024-11-04 14:48:33.404859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:03.688 [2024-11-04 14:48:33.405049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.688 [ 00:16:03.688 { 00:16:03.688 "name": "BaseBdev3", 00:16:03.688 "aliases": [ 00:16:03.688 "88a2e1f8-750b-4b6e-bc85-0a5629dd7859" 00:16:03.688 ], 00:16:03.688 "product_name": "Malloc disk", 00:16:03.688 "block_size": 512, 00:16:03.688 "num_blocks": 65536, 00:16:03.688 "uuid": "88a2e1f8-750b-4b6e-bc85-0a5629dd7859", 00:16:03.688 "assigned_rate_limits": { 00:16:03.688 "rw_ios_per_sec": 0, 00:16:03.688 "rw_mbytes_per_sec": 0, 00:16:03.688 "r_mbytes_per_sec": 0, 00:16:03.688 "w_mbytes_per_sec": 0 00:16:03.688 }, 00:16:03.688 "claimed": true, 00:16:03.688 "claim_type": "exclusive_write", 00:16:03.688 "zoned": false, 00:16:03.688 "supported_io_types": { 00:16:03.688 "read": true, 00:16:03.688 "write": true, 00:16:03.688 "unmap": true, 00:16:03.688 "flush": true, 00:16:03.688 "reset": true, 00:16:03.688 "nvme_admin": false, 00:16:03.688 "nvme_io": false, 00:16:03.688 "nvme_io_md": false, 00:16:03.688 "write_zeroes": true, 00:16:03.688 "zcopy": true, 00:16:03.688 "get_zone_info": false, 00:16:03.688 "zone_management": false, 00:16:03.688 "zone_append": false, 00:16:03.688 "compare": false, 00:16:03.688 "compare_and_write": false, 00:16:03.688 "abort": true, 00:16:03.688 "seek_hole": false, 00:16:03.688 "seek_data": false, 
00:16:03.688 "copy": true, 00:16:03.688 "nvme_iov_md": false 00:16:03.688 }, 00:16:03.688 "memory_domains": [ 00:16:03.688 { 00:16:03.688 "dma_device_id": "system", 00:16:03.688 "dma_device_type": 1 00:16:03.688 }, 00:16:03.688 { 00:16:03.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.688 "dma_device_type": 2 00:16:03.688 } 00:16:03.688 ], 00:16:03.688 "driver_specific": {} 00:16:03.688 } 00:16:03.688 ] 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.688 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.688 "name": "Existed_Raid", 00:16:03.688 "uuid": "8510b6f6-e18e-42a6-92dd-8a57e465dc2a", 00:16:03.688 "strip_size_kb": 64, 00:16:03.688 "state": "online", 00:16:03.688 "raid_level": "concat", 00:16:03.688 "superblock": true, 00:16:03.688 "num_base_bdevs": 3, 00:16:03.688 "num_base_bdevs_discovered": 3, 00:16:03.688 "num_base_bdevs_operational": 3, 00:16:03.688 "base_bdevs_list": [ 00:16:03.688 { 00:16:03.688 "name": "BaseBdev1", 00:16:03.688 "uuid": "6fc7265c-fa69-4b4b-b3ca-4fc5caec3e59", 00:16:03.688 "is_configured": true, 00:16:03.688 "data_offset": 2048, 00:16:03.688 "data_size": 63488 00:16:03.688 }, 00:16:03.688 { 00:16:03.688 "name": "BaseBdev2", 00:16:03.689 "uuid": "7c52c0bd-0bcb-484e-a53c-2501b34fd3a2", 00:16:03.689 "is_configured": true, 00:16:03.689 "data_offset": 2048, 00:16:03.689 "data_size": 63488 00:16:03.689 }, 00:16:03.689 { 00:16:03.689 "name": "BaseBdev3", 00:16:03.689 "uuid": "88a2e1f8-750b-4b6e-bc85-0a5629dd7859", 00:16:03.689 "is_configured": true, 00:16:03.689 "data_offset": 2048, 00:16:03.689 "data_size": 63488 00:16:03.689 } 00:16:03.689 ] 00:16:03.689 }' 00:16:03.689 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.689 14:48:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.255 [2024-11-04 14:48:33.963860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.255 14:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.255 "name": "Existed_Raid", 00:16:04.255 "aliases": [ 00:16:04.255 "8510b6f6-e18e-42a6-92dd-8a57e465dc2a" 00:16:04.255 ], 00:16:04.255 "product_name": "Raid Volume", 00:16:04.255 "block_size": 512, 00:16:04.255 "num_blocks": 190464, 00:16:04.255 "uuid": "8510b6f6-e18e-42a6-92dd-8a57e465dc2a", 00:16:04.255 "assigned_rate_limits": { 00:16:04.255 "rw_ios_per_sec": 0, 00:16:04.255 "rw_mbytes_per_sec": 0, 00:16:04.255 
"r_mbytes_per_sec": 0, 00:16:04.255 "w_mbytes_per_sec": 0 00:16:04.255 }, 00:16:04.255 "claimed": false, 00:16:04.255 "zoned": false, 00:16:04.255 "supported_io_types": { 00:16:04.255 "read": true, 00:16:04.255 "write": true, 00:16:04.255 "unmap": true, 00:16:04.255 "flush": true, 00:16:04.255 "reset": true, 00:16:04.255 "nvme_admin": false, 00:16:04.255 "nvme_io": false, 00:16:04.255 "nvme_io_md": false, 00:16:04.255 "write_zeroes": true, 00:16:04.255 "zcopy": false, 00:16:04.255 "get_zone_info": false, 00:16:04.255 "zone_management": false, 00:16:04.255 "zone_append": false, 00:16:04.255 "compare": false, 00:16:04.255 "compare_and_write": false, 00:16:04.255 "abort": false, 00:16:04.255 "seek_hole": false, 00:16:04.255 "seek_data": false, 00:16:04.255 "copy": false, 00:16:04.255 "nvme_iov_md": false 00:16:04.255 }, 00:16:04.255 "memory_domains": [ 00:16:04.255 { 00:16:04.255 "dma_device_id": "system", 00:16:04.255 "dma_device_type": 1 00:16:04.255 }, 00:16:04.255 { 00:16:04.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.255 "dma_device_type": 2 00:16:04.255 }, 00:16:04.255 { 00:16:04.255 "dma_device_id": "system", 00:16:04.255 "dma_device_type": 1 00:16:04.255 }, 00:16:04.255 { 00:16:04.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.255 "dma_device_type": 2 00:16:04.255 }, 00:16:04.255 { 00:16:04.255 "dma_device_id": "system", 00:16:04.255 "dma_device_type": 1 00:16:04.255 }, 00:16:04.255 { 00:16:04.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.255 "dma_device_type": 2 00:16:04.255 } 00:16:04.255 ], 00:16:04.255 "driver_specific": { 00:16:04.255 "raid": { 00:16:04.255 "uuid": "8510b6f6-e18e-42a6-92dd-8a57e465dc2a", 00:16:04.255 "strip_size_kb": 64, 00:16:04.255 "state": "online", 00:16:04.255 "raid_level": "concat", 00:16:04.255 "superblock": true, 00:16:04.255 "num_base_bdevs": 3, 00:16:04.255 "num_base_bdevs_discovered": 3, 00:16:04.255 "num_base_bdevs_operational": 3, 00:16:04.255 "base_bdevs_list": [ 00:16:04.255 { 00:16:04.255 
"name": "BaseBdev1", 00:16:04.255 "uuid": "6fc7265c-fa69-4b4b-b3ca-4fc5caec3e59", 00:16:04.255 "is_configured": true, 00:16:04.255 "data_offset": 2048, 00:16:04.255 "data_size": 63488 00:16:04.255 }, 00:16:04.255 { 00:16:04.255 "name": "BaseBdev2", 00:16:04.255 "uuid": "7c52c0bd-0bcb-484e-a53c-2501b34fd3a2", 00:16:04.255 "is_configured": true, 00:16:04.255 "data_offset": 2048, 00:16:04.255 "data_size": 63488 00:16:04.255 }, 00:16:04.255 { 00:16:04.255 "name": "BaseBdev3", 00:16:04.255 "uuid": "88a2e1f8-750b-4b6e-bc85-0a5629dd7859", 00:16:04.255 "is_configured": true, 00:16:04.255 "data_offset": 2048, 00:16:04.255 "data_size": 63488 00:16:04.255 } 00:16:04.255 ] 00:16:04.255 } 00:16:04.255 } 00:16:04.255 }' 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:04.255 BaseBdev2 00:16:04.255 BaseBdev3' 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.255 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.514 14:48:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.514 [2024-11-04 14:48:34.287615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.514 [2024-11-04 14:48:34.287656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.514 [2024-11-04 14:48:34.287738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.514 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.773 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.773 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.773 "name": "Existed_Raid", 00:16:04.773 "uuid": "8510b6f6-e18e-42a6-92dd-8a57e465dc2a", 00:16:04.773 "strip_size_kb": 64, 00:16:04.773 "state": "offline", 00:16:04.773 "raid_level": "concat", 00:16:04.773 "superblock": true, 00:16:04.773 "num_base_bdevs": 3, 00:16:04.773 "num_base_bdevs_discovered": 2, 00:16:04.773 "num_base_bdevs_operational": 2, 00:16:04.773 "base_bdevs_list": [ 00:16:04.773 { 00:16:04.773 "name": null, 00:16:04.773 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:04.773 "is_configured": false, 00:16:04.773 "data_offset": 0, 00:16:04.773 "data_size": 63488 00:16:04.773 }, 00:16:04.773 { 00:16:04.773 "name": "BaseBdev2", 00:16:04.773 "uuid": "7c52c0bd-0bcb-484e-a53c-2501b34fd3a2", 00:16:04.773 "is_configured": true, 00:16:04.773 "data_offset": 2048, 00:16:04.773 "data_size": 63488 00:16:04.773 }, 00:16:04.773 { 00:16:04.773 "name": "BaseBdev3", 00:16:04.773 "uuid": "88a2e1f8-750b-4b6e-bc85-0a5629dd7859", 00:16:04.773 "is_configured": true, 00:16:04.773 "data_offset": 2048, 00:16:04.773 "data_size": 63488 00:16:04.773 } 00:16:04.773 ] 00:16:04.773 }' 00:16:04.773 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.773 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.030 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:05.030 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.030 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.030 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.030 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.288 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:05.288 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.288 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:05.288 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.288 14:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:16:05.288 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.288 14:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.288 [2024-11-04 14:48:34.978648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.289 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.289 [2024-11-04 14:48:35.136762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:05.289 [2024-11-04 14:48:35.136842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.547 BaseBdev2 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.547 
14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.547 [ 00:16:05.547 { 00:16:05.547 "name": "BaseBdev2", 00:16:05.547 "aliases": [ 00:16:05.547 "5998f734-8a85-4ef2-895a-b2b9ec2ec158" 00:16:05.547 ], 00:16:05.547 "product_name": "Malloc disk", 00:16:05.547 "block_size": 512, 00:16:05.547 "num_blocks": 65536, 00:16:05.547 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:05.547 "assigned_rate_limits": { 00:16:05.547 "rw_ios_per_sec": 0, 00:16:05.547 "rw_mbytes_per_sec": 0, 00:16:05.547 "r_mbytes_per_sec": 0, 00:16:05.547 "w_mbytes_per_sec": 0 
00:16:05.547 }, 00:16:05.547 "claimed": false, 00:16:05.547 "zoned": false, 00:16:05.547 "supported_io_types": { 00:16:05.547 "read": true, 00:16:05.547 "write": true, 00:16:05.547 "unmap": true, 00:16:05.547 "flush": true, 00:16:05.547 "reset": true, 00:16:05.547 "nvme_admin": false, 00:16:05.547 "nvme_io": false, 00:16:05.547 "nvme_io_md": false, 00:16:05.547 "write_zeroes": true, 00:16:05.547 "zcopy": true, 00:16:05.547 "get_zone_info": false, 00:16:05.547 "zone_management": false, 00:16:05.547 "zone_append": false, 00:16:05.547 "compare": false, 00:16:05.547 "compare_and_write": false, 00:16:05.547 "abort": true, 00:16:05.547 "seek_hole": false, 00:16:05.547 "seek_data": false, 00:16:05.547 "copy": true, 00:16:05.547 "nvme_iov_md": false 00:16:05.547 }, 00:16:05.547 "memory_domains": [ 00:16:05.547 { 00:16:05.547 "dma_device_id": "system", 00:16:05.547 "dma_device_type": 1 00:16:05.547 }, 00:16:05.547 { 00:16:05.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.547 "dma_device_type": 2 00:16:05.547 } 00:16:05.547 ], 00:16:05.547 "driver_specific": {} 00:16:05.547 } 00:16:05.547 ] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.547 BaseBdev3 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.547 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.547 [ 00:16:05.547 { 00:16:05.547 "name": "BaseBdev3", 00:16:05.547 "aliases": [ 00:16:05.547 "06b23efc-aa58-4060-95b2-134ae4d7b440" 00:16:05.547 ], 00:16:05.547 "product_name": "Malloc disk", 00:16:05.547 "block_size": 512, 00:16:05.547 "num_blocks": 65536, 00:16:05.547 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:05.547 "assigned_rate_limits": { 00:16:05.547 "rw_ios_per_sec": 0, 00:16:05.547 "rw_mbytes_per_sec": 0, 
00:16:05.547 "r_mbytes_per_sec": 0, 00:16:05.547 "w_mbytes_per_sec": 0 00:16:05.547 }, 00:16:05.547 "claimed": false, 00:16:05.547 "zoned": false, 00:16:05.548 "supported_io_types": { 00:16:05.548 "read": true, 00:16:05.548 "write": true, 00:16:05.548 "unmap": true, 00:16:05.548 "flush": true, 00:16:05.548 "reset": true, 00:16:05.548 "nvme_admin": false, 00:16:05.548 "nvme_io": false, 00:16:05.548 "nvme_io_md": false, 00:16:05.806 "write_zeroes": true, 00:16:05.806 "zcopy": true, 00:16:05.806 "get_zone_info": false, 00:16:05.806 "zone_management": false, 00:16:05.806 "zone_append": false, 00:16:05.806 "compare": false, 00:16:05.806 "compare_and_write": false, 00:16:05.806 "abort": true, 00:16:05.806 "seek_hole": false, 00:16:05.806 "seek_data": false, 00:16:05.806 "copy": true, 00:16:05.806 "nvme_iov_md": false 00:16:05.806 }, 00:16:05.806 "memory_domains": [ 00:16:05.806 { 00:16:05.806 "dma_device_id": "system", 00:16:05.806 "dma_device_type": 1 00:16:05.806 }, 00:16:05.806 { 00:16:05.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.806 "dma_device_type": 2 00:16:05.806 } 00:16:05.806 ], 00:16:05.806 "driver_specific": {} 00:16:05.806 } 00:16:05.806 ] 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:05.806 [2024-11-04 14:48:35.452030] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.806 [2024-11-04 14:48:35.452094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.806 [2024-11-04 14:48:35.452135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.806 [2024-11-04 14:48:35.454804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.806 14:48:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.806 "name": "Existed_Raid", 00:16:05.806 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:05.806 "strip_size_kb": 64, 00:16:05.806 "state": "configuring", 00:16:05.806 "raid_level": "concat", 00:16:05.806 "superblock": true, 00:16:05.806 "num_base_bdevs": 3, 00:16:05.806 "num_base_bdevs_discovered": 2, 00:16:05.806 "num_base_bdevs_operational": 3, 00:16:05.806 "base_bdevs_list": [ 00:16:05.806 { 00:16:05.806 "name": "BaseBdev1", 00:16:05.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.806 "is_configured": false, 00:16:05.806 "data_offset": 0, 00:16:05.806 "data_size": 0 00:16:05.806 }, 00:16:05.806 { 00:16:05.806 "name": "BaseBdev2", 00:16:05.806 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:05.806 "is_configured": true, 00:16:05.806 "data_offset": 2048, 00:16:05.806 "data_size": 63488 00:16:05.806 }, 00:16:05.806 { 00:16:05.806 "name": "BaseBdev3", 00:16:05.806 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:05.806 "is_configured": true, 00:16:05.806 "data_offset": 2048, 00:16:05.806 "data_size": 63488 00:16:05.806 } 00:16:05.806 ] 00:16:05.806 }' 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.806 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.391 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:16:06.391 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.391 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.391 [2024-11-04 14:48:35.992132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:06.391 14:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.391 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.392 14:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.392 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:06.392 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.392 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.392 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.392 "name": "Existed_Raid", 00:16:06.392 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:06.392 "strip_size_kb": 64, 00:16:06.392 "state": "configuring", 00:16:06.392 "raid_level": "concat", 00:16:06.392 "superblock": true, 00:16:06.392 "num_base_bdevs": 3, 00:16:06.392 "num_base_bdevs_discovered": 1, 00:16:06.392 "num_base_bdevs_operational": 3, 00:16:06.392 "base_bdevs_list": [ 00:16:06.392 { 00:16:06.392 "name": "BaseBdev1", 00:16:06.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.392 "is_configured": false, 00:16:06.392 "data_offset": 0, 00:16:06.392 "data_size": 0 00:16:06.392 }, 00:16:06.392 { 00:16:06.392 "name": null, 00:16:06.392 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:06.392 "is_configured": false, 00:16:06.392 "data_offset": 0, 00:16:06.392 "data_size": 63488 00:16:06.392 }, 00:16:06.392 { 00:16:06.392 "name": "BaseBdev3", 00:16:06.392 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:06.392 "is_configured": true, 00:16:06.392 "data_offset": 2048, 00:16:06.392 "data_size": 63488 00:16:06.392 } 00:16:06.392 ] 00:16:06.392 }' 00:16:06.392 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.392 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.651 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.651 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:06.651 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:06.651 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.651 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.909 [2024-11-04 14:48:36.606381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.909 BaseBdev1 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.909 14:48:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.909 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.909 [ 00:16:06.909 { 00:16:06.909 "name": "BaseBdev1", 00:16:06.909 "aliases": [ 00:16:06.909 "58025f0f-676d-41b7-a5ad-d79be7b21325" 00:16:06.909 ], 00:16:06.909 "product_name": "Malloc disk", 00:16:06.909 "block_size": 512, 00:16:06.909 "num_blocks": 65536, 00:16:06.909 "uuid": "58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:06.909 "assigned_rate_limits": { 00:16:06.909 "rw_ios_per_sec": 0, 00:16:06.909 "rw_mbytes_per_sec": 0, 00:16:06.909 "r_mbytes_per_sec": 0, 00:16:06.909 "w_mbytes_per_sec": 0 00:16:06.909 }, 00:16:06.909 "claimed": true, 00:16:06.909 "claim_type": "exclusive_write", 00:16:06.909 "zoned": false, 00:16:06.909 "supported_io_types": { 00:16:06.909 "read": true, 00:16:06.909 "write": true, 00:16:06.909 "unmap": true, 00:16:06.909 "flush": true, 00:16:06.909 "reset": true, 00:16:06.909 "nvme_admin": false, 00:16:06.909 "nvme_io": false, 00:16:06.909 "nvme_io_md": false, 00:16:06.909 "write_zeroes": true, 00:16:06.909 "zcopy": true, 00:16:06.909 "get_zone_info": false, 00:16:06.909 "zone_management": false, 00:16:06.909 "zone_append": false, 00:16:06.909 "compare": false, 00:16:06.910 "compare_and_write": false, 00:16:06.910 "abort": true, 00:16:06.910 "seek_hole": false, 00:16:06.910 "seek_data": false, 00:16:06.910 "copy": true, 00:16:06.910 "nvme_iov_md": false 00:16:06.910 }, 00:16:06.910 "memory_domains": [ 00:16:06.910 { 00:16:06.910 "dma_device_id": "system", 00:16:06.910 "dma_device_type": 1 00:16:06.910 }, 00:16:06.910 { 00:16:06.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.910 
"dma_device_type": 2 00:16:06.910 } 00:16:06.910 ], 00:16:06.910 "driver_specific": {} 00:16:06.910 } 00:16:06.910 ] 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.910 "name": "Existed_Raid", 00:16:06.910 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:06.910 "strip_size_kb": 64, 00:16:06.910 "state": "configuring", 00:16:06.910 "raid_level": "concat", 00:16:06.910 "superblock": true, 00:16:06.910 "num_base_bdevs": 3, 00:16:06.910 "num_base_bdevs_discovered": 2, 00:16:06.910 "num_base_bdevs_operational": 3, 00:16:06.910 "base_bdevs_list": [ 00:16:06.910 { 00:16:06.910 "name": "BaseBdev1", 00:16:06.910 "uuid": "58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:06.910 "is_configured": true, 00:16:06.910 "data_offset": 2048, 00:16:06.910 "data_size": 63488 00:16:06.910 }, 00:16:06.910 { 00:16:06.910 "name": null, 00:16:06.910 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:06.910 "is_configured": false, 00:16:06.910 "data_offset": 0, 00:16:06.910 "data_size": 63488 00:16:06.910 }, 00:16:06.910 { 00:16:06.910 "name": "BaseBdev3", 00:16:06.910 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:06.910 "is_configured": true, 00:16:06.910 "data_offset": 2048, 00:16:06.910 "data_size": 63488 00:16:06.910 } 00:16:06.910 ] 00:16:06.910 }' 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.910 14:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.477 [2024-11-04 14:48:37.234640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.477 "name": "Existed_Raid", 00:16:07.477 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:07.477 "strip_size_kb": 64, 00:16:07.477 "state": "configuring", 00:16:07.477 "raid_level": "concat", 00:16:07.477 "superblock": true, 00:16:07.477 "num_base_bdevs": 3, 00:16:07.477 "num_base_bdevs_discovered": 1, 00:16:07.477 "num_base_bdevs_operational": 3, 00:16:07.477 "base_bdevs_list": [ 00:16:07.477 { 00:16:07.477 "name": "BaseBdev1", 00:16:07.477 "uuid": "58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:07.477 "is_configured": true, 00:16:07.477 "data_offset": 2048, 00:16:07.477 "data_size": 63488 00:16:07.477 }, 00:16:07.477 { 00:16:07.477 "name": null, 00:16:07.477 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:07.477 "is_configured": false, 00:16:07.477 "data_offset": 0, 00:16:07.477 "data_size": 63488 00:16:07.477 }, 00:16:07.477 { 00:16:07.477 "name": null, 00:16:07.477 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:07.477 "is_configured": false, 00:16:07.477 "data_offset": 0, 00:16:07.477 "data_size": 63488 00:16:07.477 } 00:16:07.477 ] 00:16:07.477 }' 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.477 14:48:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.044 [2024-11-04 14:48:37.818837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.044 14:48:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.044 "name": "Existed_Raid", 00:16:08.044 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:08.044 "strip_size_kb": 64, 00:16:08.044 "state": "configuring", 00:16:08.044 "raid_level": "concat", 00:16:08.044 "superblock": true, 00:16:08.044 "num_base_bdevs": 3, 00:16:08.044 "num_base_bdevs_discovered": 2, 00:16:08.044 "num_base_bdevs_operational": 3, 00:16:08.044 "base_bdevs_list": [ 00:16:08.044 { 00:16:08.044 "name": "BaseBdev1", 00:16:08.044 "uuid": "58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:08.044 "is_configured": true, 00:16:08.044 "data_offset": 2048, 00:16:08.044 "data_size": 63488 00:16:08.044 }, 00:16:08.044 { 00:16:08.044 "name": null, 00:16:08.044 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:08.044 "is_configured": 
false, 00:16:08.044 "data_offset": 0, 00:16:08.044 "data_size": 63488 00:16:08.044 }, 00:16:08.044 { 00:16:08.044 "name": "BaseBdev3", 00:16:08.044 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:08.044 "is_configured": true, 00:16:08.044 "data_offset": 2048, 00:16:08.044 "data_size": 63488 00:16:08.044 } 00:16:08.044 ] 00:16:08.044 }' 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.044 14:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.610 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.610 [2024-11-04 14:48:38.419026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:08.870 14:48:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.870 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.871 "name": "Existed_Raid", 00:16:08.871 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:08.871 "strip_size_kb": 64, 00:16:08.871 "state": "configuring", 00:16:08.871 "raid_level": "concat", 00:16:08.871 "superblock": true, 00:16:08.871 "num_base_bdevs": 3, 00:16:08.871 
"num_base_bdevs_discovered": 1, 00:16:08.871 "num_base_bdevs_operational": 3, 00:16:08.871 "base_bdevs_list": [ 00:16:08.871 { 00:16:08.871 "name": null, 00:16:08.871 "uuid": "58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:08.871 "is_configured": false, 00:16:08.871 "data_offset": 0, 00:16:08.871 "data_size": 63488 00:16:08.871 }, 00:16:08.871 { 00:16:08.871 "name": null, 00:16:08.871 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:08.871 "is_configured": false, 00:16:08.871 "data_offset": 0, 00:16:08.871 "data_size": 63488 00:16:08.871 }, 00:16:08.871 { 00:16:08.871 "name": "BaseBdev3", 00:16:08.871 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:08.871 "is_configured": true, 00:16:08.871 "data_offset": 2048, 00:16:08.871 "data_size": 63488 00:16:08.871 } 00:16:08.871 ] 00:16:08.871 }' 00:16:08.871 14:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.871 14:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.147 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.147 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.147 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.147 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.405 14:48:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.405 [2024-11-04 14:48:39.086489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.405 
14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.405 "name": "Existed_Raid", 00:16:09.405 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:09.405 "strip_size_kb": 64, 00:16:09.405 "state": "configuring", 00:16:09.405 "raid_level": "concat", 00:16:09.405 "superblock": true, 00:16:09.405 "num_base_bdevs": 3, 00:16:09.405 "num_base_bdevs_discovered": 2, 00:16:09.405 "num_base_bdevs_operational": 3, 00:16:09.405 "base_bdevs_list": [ 00:16:09.405 { 00:16:09.405 "name": null, 00:16:09.405 "uuid": "58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:09.405 "is_configured": false, 00:16:09.405 "data_offset": 0, 00:16:09.405 "data_size": 63488 00:16:09.405 }, 00:16:09.405 { 00:16:09.405 "name": "BaseBdev2", 00:16:09.405 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:09.405 "is_configured": true, 00:16:09.405 "data_offset": 2048, 00:16:09.405 "data_size": 63488 00:16:09.405 }, 00:16:09.405 { 00:16:09.405 "name": "BaseBdev3", 00:16:09.405 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:09.405 "is_configured": true, 00:16:09.405 "data_offset": 2048, 00:16:09.405 "data_size": 63488 00:16:09.405 } 00:16:09.405 ] 00:16:09.405 }' 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.405 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 58025f0f-676d-41b7-a5ad-d79be7b21325 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 [2024-11-04 14:48:39.772144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:09.972 NewBaseBdev 00:16:09.972 [2024-11-04 14:48:39.772773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:09.972 [2024-11-04 14:48:39.772807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:09.972 [2024-11-04 14:48:39.773158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:09.972 [2024-11-04 14:48:39.773371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:09.972 [2024-11-04 14:48:39.773389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:16:09.972 [2024-11-04 14:48:39.773582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 [ 00:16:09.972 { 00:16:09.972 "name": "NewBaseBdev", 00:16:09.972 "aliases": [ 00:16:09.972 "58025f0f-676d-41b7-a5ad-d79be7b21325" 00:16:09.972 ], 00:16:09.972 "product_name": "Malloc disk", 00:16:09.972 "block_size": 512, 
00:16:09.972 "num_blocks": 65536, 00:16:09.972 "uuid": "58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:09.972 "assigned_rate_limits": { 00:16:09.972 "rw_ios_per_sec": 0, 00:16:09.972 "rw_mbytes_per_sec": 0, 00:16:09.972 "r_mbytes_per_sec": 0, 00:16:09.972 "w_mbytes_per_sec": 0 00:16:09.972 }, 00:16:09.972 "claimed": true, 00:16:09.972 "claim_type": "exclusive_write", 00:16:09.972 "zoned": false, 00:16:09.972 "supported_io_types": { 00:16:09.972 "read": true, 00:16:09.972 "write": true, 00:16:09.972 "unmap": true, 00:16:09.972 "flush": true, 00:16:09.972 "reset": true, 00:16:09.972 "nvme_admin": false, 00:16:09.972 "nvme_io": false, 00:16:09.972 "nvme_io_md": false, 00:16:09.972 "write_zeroes": true, 00:16:09.972 "zcopy": true, 00:16:09.972 "get_zone_info": false, 00:16:09.972 "zone_management": false, 00:16:09.972 "zone_append": false, 00:16:09.972 "compare": false, 00:16:09.972 "compare_and_write": false, 00:16:09.972 "abort": true, 00:16:09.972 "seek_hole": false, 00:16:09.972 "seek_data": false, 00:16:09.972 "copy": true, 00:16:09.972 "nvme_iov_md": false 00:16:09.972 }, 00:16:09.972 "memory_domains": [ 00:16:09.972 { 00:16:09.972 "dma_device_id": "system", 00:16:09.972 "dma_device_type": 1 00:16:09.972 }, 00:16:09.972 { 00:16:09.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.972 "dma_device_type": 2 00:16:09.972 } 00:16:09.972 ], 00:16:09.972 "driver_specific": {} 00:16:09.972 } 00:16:09.972 ] 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.972 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.231 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.231 "name": "Existed_Raid", 00:16:10.231 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:10.231 "strip_size_kb": 64, 00:16:10.231 "state": "online", 00:16:10.231 "raid_level": "concat", 00:16:10.231 "superblock": true, 00:16:10.231 "num_base_bdevs": 3, 00:16:10.231 "num_base_bdevs_discovered": 3, 00:16:10.231 "num_base_bdevs_operational": 3, 00:16:10.231 "base_bdevs_list": [ 00:16:10.231 { 00:16:10.231 "name": "NewBaseBdev", 00:16:10.231 "uuid": 
"58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:10.231 "is_configured": true, 00:16:10.231 "data_offset": 2048, 00:16:10.231 "data_size": 63488 00:16:10.231 }, 00:16:10.231 { 00:16:10.231 "name": "BaseBdev2", 00:16:10.231 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:10.231 "is_configured": true, 00:16:10.231 "data_offset": 2048, 00:16:10.231 "data_size": 63488 00:16:10.231 }, 00:16:10.231 { 00:16:10.231 "name": "BaseBdev3", 00:16:10.231 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:10.231 "is_configured": true, 00:16:10.231 "data_offset": 2048, 00:16:10.231 "data_size": 63488 00:16:10.231 } 00:16:10.231 ] 00:16:10.231 }' 00:16:10.231 14:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.231 14:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:16:10.489 [2024-11-04 14:48:40.340808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.489 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:10.489 "name": "Existed_Raid", 00:16:10.489 "aliases": [ 00:16:10.489 "464545d2-d65d-4f11-863f-ec105462e38d" 00:16:10.489 ], 00:16:10.489 "product_name": "Raid Volume", 00:16:10.489 "block_size": 512, 00:16:10.489 "num_blocks": 190464, 00:16:10.489 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:10.489 "assigned_rate_limits": { 00:16:10.489 "rw_ios_per_sec": 0, 00:16:10.489 "rw_mbytes_per_sec": 0, 00:16:10.489 "r_mbytes_per_sec": 0, 00:16:10.489 "w_mbytes_per_sec": 0 00:16:10.489 }, 00:16:10.489 "claimed": false, 00:16:10.489 "zoned": false, 00:16:10.489 "supported_io_types": { 00:16:10.489 "read": true, 00:16:10.489 "write": true, 00:16:10.489 "unmap": true, 00:16:10.489 "flush": true, 00:16:10.489 "reset": true, 00:16:10.489 "nvme_admin": false, 00:16:10.489 "nvme_io": false, 00:16:10.489 "nvme_io_md": false, 00:16:10.489 "write_zeroes": true, 00:16:10.489 "zcopy": false, 00:16:10.489 "get_zone_info": false, 00:16:10.489 "zone_management": false, 00:16:10.489 "zone_append": false, 00:16:10.489 "compare": false, 00:16:10.489 "compare_and_write": false, 00:16:10.489 "abort": false, 00:16:10.489 "seek_hole": false, 00:16:10.489 "seek_data": false, 00:16:10.489 "copy": false, 00:16:10.489 "nvme_iov_md": false 00:16:10.489 }, 00:16:10.489 "memory_domains": [ 00:16:10.489 { 00:16:10.489 "dma_device_id": "system", 00:16:10.489 "dma_device_type": 1 00:16:10.489 }, 00:16:10.489 { 00:16:10.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.489 "dma_device_type": 2 00:16:10.489 }, 00:16:10.489 { 00:16:10.489 "dma_device_id": "system", 00:16:10.489 "dma_device_type": 1 00:16:10.489 }, 00:16:10.489 { 00:16:10.489 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.489 "dma_device_type": 2 00:16:10.489 }, 00:16:10.489 { 00:16:10.489 "dma_device_id": "system", 00:16:10.489 "dma_device_type": 1 00:16:10.489 }, 00:16:10.489 { 00:16:10.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.489 "dma_device_type": 2 00:16:10.489 } 00:16:10.489 ], 00:16:10.489 "driver_specific": { 00:16:10.489 "raid": { 00:16:10.489 "uuid": "464545d2-d65d-4f11-863f-ec105462e38d", 00:16:10.489 "strip_size_kb": 64, 00:16:10.489 "state": "online", 00:16:10.489 "raid_level": "concat", 00:16:10.489 "superblock": true, 00:16:10.489 "num_base_bdevs": 3, 00:16:10.489 "num_base_bdevs_discovered": 3, 00:16:10.489 "num_base_bdevs_operational": 3, 00:16:10.489 "base_bdevs_list": [ 00:16:10.489 { 00:16:10.489 "name": "NewBaseBdev", 00:16:10.489 "uuid": "58025f0f-676d-41b7-a5ad-d79be7b21325", 00:16:10.489 "is_configured": true, 00:16:10.489 "data_offset": 2048, 00:16:10.489 "data_size": 63488 00:16:10.489 }, 00:16:10.489 { 00:16:10.489 "name": "BaseBdev2", 00:16:10.489 "uuid": "5998f734-8a85-4ef2-895a-b2b9ec2ec158", 00:16:10.489 "is_configured": true, 00:16:10.489 "data_offset": 2048, 00:16:10.489 "data_size": 63488 00:16:10.489 }, 00:16:10.489 { 00:16:10.489 "name": "BaseBdev3", 00:16:10.489 "uuid": "06b23efc-aa58-4060-95b2-134ae4d7b440", 00:16:10.489 "is_configured": true, 00:16:10.489 "data_offset": 2048, 00:16:10.489 "data_size": 63488 00:16:10.489 } 00:16:10.489 ] 00:16:10.489 } 00:16:10.489 } 00:16:10.489 }' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:10.747 BaseBdev2 00:16:10.747 BaseBdev3' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.747 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.005 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.006 [2024-11-04 14:48:40.648486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:11.006 [2024-11-04 14:48:40.648530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.006 [2024-11-04 14:48:40.648679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.006 [2024-11-04 14:48:40.648767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.006 [2024-11-04 14:48:40.648799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66372 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66372 ']' 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66372 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66372 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:11.006 killing process with pid 66372 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66372' 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66372 00:16:11.006 [2024-11-04 14:48:40.687684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.006 14:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66372 00:16:11.264 [2024-11-04 14:48:40.989556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.636 ************************************ 00:16:12.636 END TEST raid_state_function_test_sb 00:16:12.636 ************************************ 00:16:12.636 14:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:12.636 00:16:12.636 real 0m12.128s 
00:16:12.636 user 0m19.900s 00:16:12.636 sys 0m1.758s 00:16:12.636 14:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:12.636 14:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.636 14:48:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:12.636 14:48:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:12.636 14:48:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:12.636 14:48:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.637 ************************************ 00:16:12.637 START TEST raid_superblock_test 00:16:12.637 ************************************ 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:12.637 14:48:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:12.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67010 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67010 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67010 ']' 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:12.637 14:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.637 [2024-11-04 14:48:42.300594] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:16:12.637 [2024-11-04 14:48:42.300784] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67010 ] 00:16:12.637 [2024-11-04 14:48:42.485243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.895 [2024-11-04 14:48:42.628717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.152 [2024-11-04 14:48:42.857186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.152 [2024-11-04 14:48:42.857294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:13.410 
14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.410 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.668 malloc1 00:16:13.668 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.668 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:13.668 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.668 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.668 [2024-11-04 14:48:43.333901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:13.668 [2024-11-04 14:48:43.334127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.668 [2024-11-04 14:48:43.334209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:13.668 [2024-11-04 14:48:43.334443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.668 [2024-11-04 14:48:43.337576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.669 [2024-11-04 14:48:43.337737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:13.669 pt1 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.669 malloc2 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.669 [2024-11-04 14:48:43.393749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:13.669 [2024-11-04 14:48:43.393827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.669 [2024-11-04 14:48:43.393864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:13.669 [2024-11-04 14:48:43.393879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.669 [2024-11-04 14:48:43.396794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.669 [2024-11-04 14:48:43.396838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:13.669 
pt2 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.669 malloc3 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.669 [2024-11-04 14:48:43.464775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:13.669 [2024-11-04 14:48:43.464853] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.669 [2024-11-04 14:48:43.464890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:13.669 [2024-11-04 14:48:43.464906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.669 [2024-11-04 14:48:43.467902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.669 [2024-11-04 14:48:43.468070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:13.669 pt3 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.669 [2024-11-04 14:48:43.476966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:13.669 [2024-11-04 14:48:43.479567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:13.669 [2024-11-04 14:48:43.479663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:13.669 [2024-11-04 14:48:43.479886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:13.669 [2024-11-04 14:48:43.479909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:13.669 [2024-11-04 14:48:43.480265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:16:13.669 [2024-11-04 14:48:43.480494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:13.669 [2024-11-04 14:48:43.480511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:13.669 [2024-11-04 14:48:43.480704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.669 14:48:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.669 "name": "raid_bdev1", 00:16:13.669 "uuid": "517d03e4-9682-428c-b1dc-7256eaf98616", 00:16:13.669 "strip_size_kb": 64, 00:16:13.669 "state": "online", 00:16:13.669 "raid_level": "concat", 00:16:13.669 "superblock": true, 00:16:13.669 "num_base_bdevs": 3, 00:16:13.669 "num_base_bdevs_discovered": 3, 00:16:13.669 "num_base_bdevs_operational": 3, 00:16:13.669 "base_bdevs_list": [ 00:16:13.669 { 00:16:13.669 "name": "pt1", 00:16:13.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:13.669 "is_configured": true, 00:16:13.669 "data_offset": 2048, 00:16:13.669 "data_size": 63488 00:16:13.669 }, 00:16:13.669 { 00:16:13.669 "name": "pt2", 00:16:13.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:13.669 "is_configured": true, 00:16:13.669 "data_offset": 2048, 00:16:13.669 "data_size": 63488 00:16:13.669 }, 00:16:13.669 { 00:16:13.669 "name": "pt3", 00:16:13.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:13.669 "is_configured": true, 00:16:13.669 "data_offset": 2048, 00:16:13.669 "data_size": 63488 00:16:13.669 } 00:16:13.669 ] 00:16:13.669 }' 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.669 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.235 14:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.235 [2024-11-04 14:48:43.989498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.235 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.235 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.235 "name": "raid_bdev1", 00:16:14.235 "aliases": [ 00:16:14.235 "517d03e4-9682-428c-b1dc-7256eaf98616" 00:16:14.235 ], 00:16:14.235 "product_name": "Raid Volume", 00:16:14.235 "block_size": 512, 00:16:14.235 "num_blocks": 190464, 00:16:14.235 "uuid": "517d03e4-9682-428c-b1dc-7256eaf98616", 00:16:14.235 "assigned_rate_limits": { 00:16:14.235 "rw_ios_per_sec": 0, 00:16:14.235 "rw_mbytes_per_sec": 0, 00:16:14.235 "r_mbytes_per_sec": 0, 00:16:14.235 "w_mbytes_per_sec": 0 00:16:14.235 }, 00:16:14.235 "claimed": false, 00:16:14.235 "zoned": false, 00:16:14.235 "supported_io_types": { 00:16:14.235 "read": true, 00:16:14.235 "write": true, 00:16:14.235 "unmap": true, 00:16:14.235 "flush": true, 00:16:14.235 "reset": true, 00:16:14.235 "nvme_admin": false, 00:16:14.235 "nvme_io": false, 00:16:14.235 "nvme_io_md": false, 00:16:14.235 "write_zeroes": true, 00:16:14.235 "zcopy": false, 00:16:14.235 "get_zone_info": false, 00:16:14.235 "zone_management": false, 00:16:14.235 "zone_append": false, 00:16:14.235 "compare": 
false, 00:16:14.235 "compare_and_write": false, 00:16:14.235 "abort": false, 00:16:14.235 "seek_hole": false, 00:16:14.235 "seek_data": false, 00:16:14.235 "copy": false, 00:16:14.235 "nvme_iov_md": false 00:16:14.235 }, 00:16:14.235 "memory_domains": [ 00:16:14.235 { 00:16:14.235 "dma_device_id": "system", 00:16:14.235 "dma_device_type": 1 00:16:14.235 }, 00:16:14.235 { 00:16:14.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.235 "dma_device_type": 2 00:16:14.235 }, 00:16:14.235 { 00:16:14.235 "dma_device_id": "system", 00:16:14.235 "dma_device_type": 1 00:16:14.235 }, 00:16:14.235 { 00:16:14.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.235 "dma_device_type": 2 00:16:14.235 }, 00:16:14.235 { 00:16:14.235 "dma_device_id": "system", 00:16:14.235 "dma_device_type": 1 00:16:14.235 }, 00:16:14.235 { 00:16:14.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.235 "dma_device_type": 2 00:16:14.235 } 00:16:14.235 ], 00:16:14.235 "driver_specific": { 00:16:14.235 "raid": { 00:16:14.235 "uuid": "517d03e4-9682-428c-b1dc-7256eaf98616", 00:16:14.235 "strip_size_kb": 64, 00:16:14.235 "state": "online", 00:16:14.235 "raid_level": "concat", 00:16:14.235 "superblock": true, 00:16:14.235 "num_base_bdevs": 3, 00:16:14.235 "num_base_bdevs_discovered": 3, 00:16:14.235 "num_base_bdevs_operational": 3, 00:16:14.235 "base_bdevs_list": [ 00:16:14.235 { 00:16:14.235 "name": "pt1", 00:16:14.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.235 "is_configured": true, 00:16:14.235 "data_offset": 2048, 00:16:14.235 "data_size": 63488 00:16:14.235 }, 00:16:14.235 { 00:16:14.235 "name": "pt2", 00:16:14.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.235 "is_configured": true, 00:16:14.235 "data_offset": 2048, 00:16:14.235 "data_size": 63488 00:16:14.235 }, 00:16:14.235 { 00:16:14.235 "name": "pt3", 00:16:14.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.235 "is_configured": true, 00:16:14.235 "data_offset": 2048, 00:16:14.235 
"data_size": 63488 00:16:14.236 } 00:16:14.236 ] 00:16:14.236 } 00:16:14.236 } 00:16:14.236 }' 00:16:14.236 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.236 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:14.236 pt2 00:16:14.236 pt3' 00:16:14.236 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:14.495 14:48:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.495 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:14.496 [2024-11-04 14:48:44.313528] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.496 14:48:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=517d03e4-9682-428c-b1dc-7256eaf98616 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 517d03e4-9682-428c-b1dc-7256eaf98616 ']' 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.496 [2024-11-04 14:48:44.361162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.496 [2024-11-04 14:48:44.361344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.496 [2024-11-04 14:48:44.361514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.496 [2024-11-04 14:48:44.361611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.496 [2024-11-04 14:48:44.361627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:14.496 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.754 14:48:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.755 [2024-11-04 14:48:44.505311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:14.755 [2024-11-04 14:48:44.507989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:16:14.755 [2024-11-04 14:48:44.508057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:14.755 [2024-11-04 14:48:44.508141] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:14.755 [2024-11-04 14:48:44.508245] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:14.755 [2024-11-04 14:48:44.508289] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:14.755 [2024-11-04 14:48:44.508318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.755 [2024-11-04 14:48:44.508333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:14.755 request: 00:16:14.755 { 00:16:14.755 "name": "raid_bdev1", 00:16:14.755 "raid_level": "concat", 00:16:14.755 "base_bdevs": [ 00:16:14.755 "malloc1", 00:16:14.755 "malloc2", 00:16:14.755 "malloc3" 00:16:14.755 ], 00:16:14.755 "strip_size_kb": 64, 00:16:14.755 "superblock": false, 00:16:14.755 "method": "bdev_raid_create", 00:16:14.755 "req_id": 1 00:16:14.755 } 00:16:14.755 Got JSON-RPC error response 00:16:14.755 response: 00:16:14.755 { 00:16:14.755 "code": -17, 00:16:14.755 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:14.755 } 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.755 [2024-11-04 14:48:44.577306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:14.755 [2024-11-04 14:48:44.577539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.755 [2024-11-04 14:48:44.577618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:14.755 [2024-11-04 14:48:44.577834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.755 [2024-11-04 14:48:44.580998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.755 [2024-11-04 14:48:44.581153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:14.755 [2024-11-04 14:48:44.581396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:14.755 [2024-11-04 14:48:44.581611] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:14.755 pt1 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.755 "name": "raid_bdev1", 
00:16:14.755 "uuid": "517d03e4-9682-428c-b1dc-7256eaf98616", 00:16:14.755 "strip_size_kb": 64, 00:16:14.755 "state": "configuring", 00:16:14.755 "raid_level": "concat", 00:16:14.755 "superblock": true, 00:16:14.755 "num_base_bdevs": 3, 00:16:14.755 "num_base_bdevs_discovered": 1, 00:16:14.755 "num_base_bdevs_operational": 3, 00:16:14.755 "base_bdevs_list": [ 00:16:14.755 { 00:16:14.755 "name": "pt1", 00:16:14.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.755 "is_configured": true, 00:16:14.755 "data_offset": 2048, 00:16:14.755 "data_size": 63488 00:16:14.755 }, 00:16:14.755 { 00:16:14.755 "name": null, 00:16:14.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.755 "is_configured": false, 00:16:14.755 "data_offset": 2048, 00:16:14.755 "data_size": 63488 00:16:14.755 }, 00:16:14.755 { 00:16:14.755 "name": null, 00:16:14.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.755 "is_configured": false, 00:16:14.755 "data_offset": 2048, 00:16:14.755 "data_size": 63488 00:16:14.755 } 00:16:14.755 ] 00:16:14.755 }' 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.755 14:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.321 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:15.321 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.321 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.321 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.321 [2024-11-04 14:48:45.093679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.322 [2024-11-04 14:48:45.093769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.322 [2024-11-04 14:48:45.093808] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:15.322 [2024-11-04 14:48:45.093824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.322 [2024-11-04 14:48:45.094464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.322 [2024-11-04 14:48:45.094496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.322 [2024-11-04 14:48:45.094620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:15.322 [2024-11-04 14:48:45.094659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.322 pt2 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.322 [2024-11-04 14:48:45.101632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.322 "name": "raid_bdev1", 00:16:15.322 "uuid": "517d03e4-9682-428c-b1dc-7256eaf98616", 00:16:15.322 "strip_size_kb": 64, 00:16:15.322 "state": "configuring", 00:16:15.322 "raid_level": "concat", 00:16:15.322 "superblock": true, 00:16:15.322 "num_base_bdevs": 3, 00:16:15.322 "num_base_bdevs_discovered": 1, 00:16:15.322 "num_base_bdevs_operational": 3, 00:16:15.322 "base_bdevs_list": [ 00:16:15.322 { 00:16:15.322 "name": "pt1", 00:16:15.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.322 "is_configured": true, 00:16:15.322 "data_offset": 2048, 00:16:15.322 "data_size": 63488 00:16:15.322 }, 00:16:15.322 { 00:16:15.322 "name": null, 00:16:15.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.322 "is_configured": false, 00:16:15.322 "data_offset": 0, 00:16:15.322 "data_size": 63488 00:16:15.322 }, 00:16:15.322 { 00:16:15.322 "name": null, 00:16:15.322 
"uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.322 "is_configured": false, 00:16:15.322 "data_offset": 2048, 00:16:15.322 "data_size": 63488 00:16:15.322 } 00:16:15.322 ] 00:16:15.322 }' 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.322 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.888 [2024-11-04 14:48:45.593778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.888 [2024-11-04 14:48:45.594012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.888 [2024-11-04 14:48:45.594054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:15.888 [2024-11-04 14:48:45.594073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.888 [2024-11-04 14:48:45.594758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.888 [2024-11-04 14:48:45.594789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.888 [2024-11-04 14:48:45.594921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:15.888 [2024-11-04 14:48:45.594962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.888 pt2 00:16:15.888 14:48:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.888 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.888 [2024-11-04 14:48:45.605722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:15.888 [2024-11-04 14:48:45.605785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.888 [2024-11-04 14:48:45.605807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:15.888 [2024-11-04 14:48:45.605823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.888 [2024-11-04 14:48:45.606324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.888 [2024-11-04 14:48:45.606364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:15.888 [2024-11-04 14:48:45.606441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:15.888 [2024-11-04 14:48:45.606473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:15.889 [2024-11-04 14:48:45.606621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:15.889 [2024-11-04 14:48:45.606656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:15.889 [2024-11-04 14:48:45.606984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:16:15.889 [2024-11-04 14:48:45.607169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:15.889 [2024-11-04 14:48:45.607183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:15.889 [2024-11-04 14:48:45.607366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.889 pt3 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.889 14:48:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.889 "name": "raid_bdev1", 00:16:15.889 "uuid": "517d03e4-9682-428c-b1dc-7256eaf98616", 00:16:15.889 "strip_size_kb": 64, 00:16:15.889 "state": "online", 00:16:15.889 "raid_level": "concat", 00:16:15.889 "superblock": true, 00:16:15.889 "num_base_bdevs": 3, 00:16:15.889 "num_base_bdevs_discovered": 3, 00:16:15.889 "num_base_bdevs_operational": 3, 00:16:15.889 "base_bdevs_list": [ 00:16:15.889 { 00:16:15.889 "name": "pt1", 00:16:15.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.889 "is_configured": true, 00:16:15.889 "data_offset": 2048, 00:16:15.889 "data_size": 63488 00:16:15.889 }, 00:16:15.889 { 00:16:15.889 "name": "pt2", 00:16:15.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.889 "is_configured": true, 00:16:15.889 "data_offset": 2048, 00:16:15.889 "data_size": 63488 00:16:15.889 }, 00:16:15.889 { 00:16:15.889 "name": "pt3", 00:16:15.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.889 "is_configured": true, 00:16:15.889 "data_offset": 2048, 00:16:15.889 "data_size": 63488 00:16:15.889 } 00:16:15.889 ] 00:16:15.889 }' 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.889 14:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.455 [2024-11-04 14:48:46.126387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.455 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.455 "name": "raid_bdev1", 00:16:16.455 "aliases": [ 00:16:16.455 "517d03e4-9682-428c-b1dc-7256eaf98616" 00:16:16.455 ], 00:16:16.455 "product_name": "Raid Volume", 00:16:16.455 "block_size": 512, 00:16:16.455 "num_blocks": 190464, 00:16:16.455 "uuid": "517d03e4-9682-428c-b1dc-7256eaf98616", 00:16:16.455 "assigned_rate_limits": { 00:16:16.455 "rw_ios_per_sec": 0, 00:16:16.455 "rw_mbytes_per_sec": 0, 00:16:16.455 "r_mbytes_per_sec": 0, 00:16:16.455 "w_mbytes_per_sec": 0 00:16:16.455 }, 00:16:16.455 "claimed": false, 00:16:16.455 "zoned": false, 00:16:16.455 "supported_io_types": { 00:16:16.455 "read": true, 00:16:16.455 "write": true, 00:16:16.455 "unmap": true, 00:16:16.455 "flush": true, 00:16:16.455 "reset": true, 00:16:16.455 "nvme_admin": false, 00:16:16.455 "nvme_io": false, 
00:16:16.455 "nvme_io_md": false, 00:16:16.455 "write_zeroes": true, 00:16:16.455 "zcopy": false, 00:16:16.455 "get_zone_info": false, 00:16:16.455 "zone_management": false, 00:16:16.455 "zone_append": false, 00:16:16.455 "compare": false, 00:16:16.455 "compare_and_write": false, 00:16:16.455 "abort": false, 00:16:16.455 "seek_hole": false, 00:16:16.455 "seek_data": false, 00:16:16.455 "copy": false, 00:16:16.455 "nvme_iov_md": false 00:16:16.455 }, 00:16:16.455 "memory_domains": [ 00:16:16.455 { 00:16:16.455 "dma_device_id": "system", 00:16:16.455 "dma_device_type": 1 00:16:16.455 }, 00:16:16.455 { 00:16:16.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.455 "dma_device_type": 2 00:16:16.455 }, 00:16:16.455 { 00:16:16.455 "dma_device_id": "system", 00:16:16.455 "dma_device_type": 1 00:16:16.455 }, 00:16:16.455 { 00:16:16.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.455 "dma_device_type": 2 00:16:16.455 }, 00:16:16.455 { 00:16:16.455 "dma_device_id": "system", 00:16:16.455 "dma_device_type": 1 00:16:16.455 }, 00:16:16.455 { 00:16:16.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.455 "dma_device_type": 2 00:16:16.455 } 00:16:16.455 ], 00:16:16.455 "driver_specific": { 00:16:16.455 "raid": { 00:16:16.456 "uuid": "517d03e4-9682-428c-b1dc-7256eaf98616", 00:16:16.456 "strip_size_kb": 64, 00:16:16.456 "state": "online", 00:16:16.456 "raid_level": "concat", 00:16:16.456 "superblock": true, 00:16:16.456 "num_base_bdevs": 3, 00:16:16.456 "num_base_bdevs_discovered": 3, 00:16:16.456 "num_base_bdevs_operational": 3, 00:16:16.456 "base_bdevs_list": [ 00:16:16.456 { 00:16:16.456 "name": "pt1", 00:16:16.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.456 "is_configured": true, 00:16:16.456 "data_offset": 2048, 00:16:16.456 "data_size": 63488 00:16:16.456 }, 00:16:16.456 { 00:16:16.456 "name": "pt2", 00:16:16.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.456 "is_configured": true, 00:16:16.456 "data_offset": 2048, 00:16:16.456 
"data_size": 63488 00:16:16.456 }, 00:16:16.456 { 00:16:16.456 "name": "pt3", 00:16:16.456 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.456 "is_configured": true, 00:16:16.456 "data_offset": 2048, 00:16:16.456 "data_size": 63488 00:16:16.456 } 00:16:16.456 ] 00:16:16.456 } 00:16:16.456 } 00:16:16.456 }' 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:16.456 pt2 00:16:16.456 pt3' 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.456 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.714 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:16.715 [2024-11-04 14:48:46.470417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 517d03e4-9682-428c-b1dc-7256eaf98616 '!=' 517d03e4-9682-428c-b1dc-7256eaf98616 ']' 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67010 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67010 ']' 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67010 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67010 00:16:16.715 killing process with pid 67010 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67010' 00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67010 00:16:16.715 [2024-11-04 14:48:46.552021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:16:16.715 14:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67010 00:16:16.715 [2024-11-04 14:48:46.552171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.715 [2024-11-04 14:48:46.552277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.715 [2024-11-04 14:48:46.552299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:16.973 [2024-11-04 14:48:46.846773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.347 14:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:18.347 00:16:18.347 real 0m5.801s 00:16:18.347 user 0m8.585s 00:16:18.347 sys 0m0.881s 00:16:18.347 14:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:18.347 ************************************ 00:16:18.347 END TEST raid_superblock_test 00:16:18.347 ************************************ 00:16:18.347 14:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.347 14:48:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:16:18.347 14:48:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:18.347 14:48:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:18.347 14:48:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.347 ************************************ 00:16:18.347 START TEST raid_read_error_test 00:16:18.347 ************************************ 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:18.347 14:48:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8s2wa01Edg 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67263 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67263 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67263 ']' 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:18.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:18.347 14:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.347 [2024-11-04 14:48:48.175468] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:16:18.347 [2024-11-04 14:48:48.175670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67263 ] 00:16:18.605 [2024-11-04 14:48:48.361371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.863 [2024-11-04 14:48:48.510533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.863 [2024-11-04 14:48:48.742267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.863 [2024-11-04 14:48:48.742644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.428 BaseBdev1_malloc 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.428 true 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.428 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.428 [2024-11-04 14:48:49.242971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:19.428 [2024-11-04 14:48:49.243062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.428 [2024-11-04 14:48:49.243096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:19.428 [2024-11-04 14:48:49.243115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.428 [2024-11-04 14:48:49.246215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.429 [2024-11-04 14:48:49.246279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:19.429 BaseBdev1 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.429 BaseBdev2_malloc 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.429 true 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.429 [2024-11-04 14:48:49.303673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:19.429 [2024-11-04 14:48:49.303775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.429 [2024-11-04 14:48:49.303803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:19.429 [2024-11-04 14:48:49.303822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.429 [2024-11-04 14:48:49.306890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.429 [2024-11-04 14:48:49.306940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:19.429 BaseBdev2 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.429 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.686 BaseBdev3_malloc 00:16:19.686 14:48:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.686 true 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.686 [2024-11-04 14:48:49.381964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:19.686 [2024-11-04 14:48:49.382311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.686 [2024-11-04 14:48:49.382350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:19.686 [2024-11-04 14:48:49.382370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.686 [2024-11-04 14:48:49.385392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.686 [2024-11-04 14:48:49.385571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:19.686 BaseBdev3 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.686 [2024-11-04 14:48:49.390268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.686 [2024-11-04 14:48:49.392829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.686 [2024-11-04 14:48:49.392944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.686 [2024-11-04 14:48:49.393220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:19.686 [2024-11-04 14:48:49.393268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.686 [2024-11-04 14:48:49.393615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:19.686 [2024-11-04 14:48:49.393828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:19.686 [2024-11-04 14:48:49.393852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:19.686 [2024-11-04 14:48:49.394044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.686 14:48:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.686 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.686 "name": "raid_bdev1", 00:16:19.686 "uuid": "2c40993c-6ff5-4947-9045-b7816ef5620c", 00:16:19.686 "strip_size_kb": 64, 00:16:19.686 "state": "online", 00:16:19.687 "raid_level": "concat", 00:16:19.687 "superblock": true, 00:16:19.687 "num_base_bdevs": 3, 00:16:19.687 "num_base_bdevs_discovered": 3, 00:16:19.687 "num_base_bdevs_operational": 3, 00:16:19.687 "base_bdevs_list": [ 00:16:19.687 { 00:16:19.687 "name": "BaseBdev1", 00:16:19.687 "uuid": "d0fb7300-5639-5c2d-ae30-b8f57ac7e76c", 00:16:19.687 "is_configured": true, 00:16:19.687 "data_offset": 2048, 00:16:19.687 "data_size": 63488 00:16:19.687 }, 00:16:19.687 { 00:16:19.687 "name": "BaseBdev2", 00:16:19.687 "uuid": "e2ece8d7-1665-5e41-a4a8-0c7a80a2b8be", 00:16:19.687 "is_configured": true, 00:16:19.687 "data_offset": 2048, 00:16:19.687 "data_size": 63488 
00:16:19.687 }, 00:16:19.687 { 00:16:19.687 "name": "BaseBdev3", 00:16:19.687 "uuid": "afd1e11c-1a4d-59a6-8691-b5f03f207012", 00:16:19.687 "is_configured": true, 00:16:19.687 "data_offset": 2048, 00:16:19.687 "data_size": 63488 00:16:19.687 } 00:16:19.687 ] 00:16:19.687 }' 00:16:19.687 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.687 14:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.268 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:20.268 14:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:20.268 [2024-11-04 14:48:50.032111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.200 14:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.201 14:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.201 14:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.201 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.201 "name": "raid_bdev1", 00:16:21.201 "uuid": "2c40993c-6ff5-4947-9045-b7816ef5620c", 00:16:21.201 "strip_size_kb": 64, 00:16:21.201 "state": "online", 00:16:21.201 "raid_level": "concat", 00:16:21.201 "superblock": true, 00:16:21.201 "num_base_bdevs": 3, 00:16:21.201 "num_base_bdevs_discovered": 3, 00:16:21.201 "num_base_bdevs_operational": 3, 00:16:21.201 "base_bdevs_list": [ 00:16:21.201 { 00:16:21.201 "name": "BaseBdev1", 00:16:21.201 "uuid": "d0fb7300-5639-5c2d-ae30-b8f57ac7e76c", 00:16:21.201 "is_configured": true, 00:16:21.201 "data_offset": 2048, 00:16:21.201 "data_size": 63488 
00:16:21.201 }, 00:16:21.201 { 00:16:21.201 "name": "BaseBdev2", 00:16:21.201 "uuid": "e2ece8d7-1665-5e41-a4a8-0c7a80a2b8be", 00:16:21.201 "is_configured": true, 00:16:21.201 "data_offset": 2048, 00:16:21.201 "data_size": 63488 00:16:21.201 }, 00:16:21.201 { 00:16:21.201 "name": "BaseBdev3", 00:16:21.201 "uuid": "afd1e11c-1a4d-59a6-8691-b5f03f207012", 00:16:21.201 "is_configured": true, 00:16:21.201 "data_offset": 2048, 00:16:21.201 "data_size": 63488 00:16:21.201 } 00:16:21.201 ] 00:16:21.201 }' 00:16:21.201 14:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.201 14:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.766 [2024-11-04 14:48:51.467039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.766 [2024-11-04 14:48:51.467328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.766 [2024-11-04 14:48:51.471133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.766 { 00:16:21.766 "results": [ 00:16:21.766 { 00:16:21.766 "job": "raid_bdev1", 00:16:21.766 "core_mask": "0x1", 00:16:21.766 "workload": "randrw", 00:16:21.766 "percentage": 50, 00:16:21.766 "status": "finished", 00:16:21.766 "queue_depth": 1, 00:16:21.766 "io_size": 131072, 00:16:21.766 "runtime": 1.432513, 00:16:21.766 "iops": 9665.531831124743, 00:16:21.766 "mibps": 1208.1914788905929, 00:16:21.766 "io_failed": 1, 00:16:21.766 "io_timeout": 0, 00:16:21.766 "avg_latency_us": 146.04460749620858, 00:16:21.766 "min_latency_us": 42.82181818181818, 00:16:21.766 "max_latency_us": 1906.5018181818182 
00:16:21.766 } 00:16:21.766 ], 00:16:21.766 "core_count": 1 00:16:21.766 } 00:16:21.766 [2024-11-04 14:48:51.471400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.766 [2024-11-04 14:48:51.471476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.766 [2024-11-04 14:48:51.471498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67263 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67263 ']' 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67263 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67263 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67263' 00:16:21.766 killing process with pid 67263 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67263 00:16:21.766 [2024-11-04 14:48:51.513990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.766 14:48:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67263 00:16:22.024 [2024-11-04 
14:48:51.756355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8s2wa01Edg 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:16:23.396 00:16:23.396 real 0m4.959s 00:16:23.396 user 0m6.059s 00:16:23.396 sys 0m0.653s 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:23.396 14:48:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.396 ************************************ 00:16:23.396 END TEST raid_read_error_test 00:16:23.396 ************************************ 00:16:23.396 14:48:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:16:23.396 14:48:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:23.396 14:48:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:23.396 14:48:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.396 ************************************ 00:16:23.396 START TEST raid_write_error_test 00:16:23.396 ************************************ 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:16:23.396 14:48:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:23.396 14:48:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v1JMMEotsz 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67414 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67414 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67414 ']' 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:23.396 14:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.396 [2024-11-04 14:48:53.189089] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:16:23.396 [2024-11-04 14:48:53.189549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67414 ] 00:16:23.656 [2024-11-04 14:48:53.368300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.656 [2024-11-04 14:48:53.514118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.914 [2024-11-04 14:48:53.741804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.914 [2024-11-04 14:48:53.741881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.479 BaseBdev1_malloc 00:16:24.479 14:48:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.479 true 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.479 [2024-11-04 14:48:54.295584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:24.479 [2024-11-04 14:48:54.295677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.479 [2024-11-04 14:48:54.295715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:24.479 [2024-11-04 14:48:54.295734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.479 [2024-11-04 14:48:54.298928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.479 [2024-11-04 14:48:54.298982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.479 BaseBdev1 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.479 BaseBdev2_malloc 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.479 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:24.480 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.480 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.480 true 00:16:24.480 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.480 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:24.480 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.480 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.480 [2024-11-04 14:48:54.364259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:24.480 [2024-11-04 14:48:54.364488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.480 [2024-11-04 14:48:54.364531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:24.480 [2024-11-04 14:48:54.364551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.480 [2024-11-04 14:48:54.367702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.480 [2024-11-04 14:48:54.367888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:24.480 BaseBdev2 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.737 BaseBdev3_malloc 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.737 true 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.737 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.737 [2024-11-04 14:48:54.441760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:24.737 [2024-11-04 14:48:54.441968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.737 [2024-11-04 14:48:54.442039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:24.738 [2024-11-04 14:48:54.442147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.738 [2024-11-04 14:48:54.445285] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.738 [2024-11-04 14:48:54.445454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:24.738 BaseBdev3 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.738 [2024-11-04 14:48:54.453952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.738 [2024-11-04 14:48:54.456559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.738 [2024-11-04 14:48:54.456809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.738 [2024-11-04 14:48:54.457114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:24.738 [2024-11-04 14:48:54.457134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.738 [2024-11-04 14:48:54.457548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:24.738 [2024-11-04 14:48:54.457776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:24.738 [2024-11-04 14:48:54.457801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:24.738 [2024-11-04 14:48:54.458058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.738 
14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.738 "name": "raid_bdev1", 00:16:24.738 "uuid": "c3c9e080-22ae-4343-8852-6b5721668ea7", 00:16:24.738 "strip_size_kb": 64, 00:16:24.738 "state": "online", 00:16:24.738 "raid_level": "concat", 00:16:24.738 "superblock": true, 
00:16:24.738 "num_base_bdevs": 3, 00:16:24.738 "num_base_bdevs_discovered": 3, 00:16:24.738 "num_base_bdevs_operational": 3, 00:16:24.738 "base_bdevs_list": [ 00:16:24.738 { 00:16:24.738 "name": "BaseBdev1", 00:16:24.738 "uuid": "3ea0d38a-92fd-56b2-8a9e-b4bd1b1b44b2", 00:16:24.738 "is_configured": true, 00:16:24.738 "data_offset": 2048, 00:16:24.738 "data_size": 63488 00:16:24.738 }, 00:16:24.738 { 00:16:24.738 "name": "BaseBdev2", 00:16:24.738 "uuid": "f8c61f9c-175b-52d5-8a73-582f2f4c7f23", 00:16:24.738 "is_configured": true, 00:16:24.738 "data_offset": 2048, 00:16:24.738 "data_size": 63488 00:16:24.738 }, 00:16:24.738 { 00:16:24.738 "name": "BaseBdev3", 00:16:24.738 "uuid": "34453c3d-b94d-5da5-8b1f-d9c5f12609b9", 00:16:24.738 "is_configured": true, 00:16:24.738 "data_offset": 2048, 00:16:24.738 "data_size": 63488 00:16:24.738 } 00:16:24.738 ] 00:16:24.738 }' 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.738 14:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.302 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:25.302 14:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:25.302 [2024-11-04 14:48:55.127760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:26.236 14:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:26.236 14:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.236 14:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:26.236 "name": "raid_bdev1", 00:16:26.236 "uuid": "c3c9e080-22ae-4343-8852-6b5721668ea7", 00:16:26.236 "strip_size_kb": 64, 00:16:26.236 "state": "online", 00:16:26.236 "raid_level": "concat", 00:16:26.236 "superblock": true, 00:16:26.236 "num_base_bdevs": 3, 00:16:26.236 "num_base_bdevs_discovered": 3, 00:16:26.236 "num_base_bdevs_operational": 3, 00:16:26.236 "base_bdevs_list": [ 00:16:26.236 { 00:16:26.236 "name": "BaseBdev1", 00:16:26.236 "uuid": "3ea0d38a-92fd-56b2-8a9e-b4bd1b1b44b2", 00:16:26.236 "is_configured": true, 00:16:26.236 "data_offset": 2048, 00:16:26.236 "data_size": 63488 00:16:26.236 }, 00:16:26.236 { 00:16:26.236 "name": "BaseBdev2", 00:16:26.236 "uuid": "f8c61f9c-175b-52d5-8a73-582f2f4c7f23", 00:16:26.236 "is_configured": true, 00:16:26.236 "data_offset": 2048, 00:16:26.236 "data_size": 63488 00:16:26.236 }, 00:16:26.236 { 00:16:26.236 "name": "BaseBdev3", 00:16:26.236 "uuid": "34453c3d-b94d-5da5-8b1f-d9c5f12609b9", 00:16:26.236 "is_configured": true, 00:16:26.236 "data_offset": 2048, 00:16:26.236 "data_size": 63488 00:16:26.236 } 00:16:26.236 ] 00:16:26.236 }' 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.236 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.802 [2024-11-04 14:48:56.521875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.802 [2024-11-04 14:48:56.521919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.802 [2024-11-04 14:48:56.525351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:26.802 [2024-11-04 14:48:56.525687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.802 [2024-11-04 14:48:56.525780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.802 [2024-11-04 14:48:56.525802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:26.802 { 00:16:26.802 "results": [ 00:16:26.802 { 00:16:26.802 "job": "raid_bdev1", 00:16:26.802 "core_mask": "0x1", 00:16:26.802 "workload": "randrw", 00:16:26.802 "percentage": 50, 00:16:26.802 "status": "finished", 00:16:26.802 "queue_depth": 1, 00:16:26.802 "io_size": 131072, 00:16:26.802 "runtime": 1.391105, 00:16:26.802 "iops": 9583.02931841953, 00:16:26.802 "mibps": 1197.8786648024413, 00:16:26.802 "io_failed": 1, 00:16:26.802 "io_timeout": 0, 00:16:26.802 "avg_latency_us": 146.73485966778495, 00:16:26.802 "min_latency_us": 44.21818181818182, 00:16:26.802 "max_latency_us": 1846.9236363636364 00:16:26.802 } 00:16:26.802 ], 00:16:26.802 "core_count": 1 00:16:26.802 } 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67414 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67414 ']' 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67414 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67414 00:16:26.802 killing process with pid 67414 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67414' 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67414 00:16:26.802 14:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67414 00:16:26.802 [2024-11-04 14:48:56.560950] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.060 [2024-11-04 14:48:56.794112] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.432 14:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v1JMMEotsz 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:16:28.433 00:16:28.433 real 0m4.957s 00:16:28.433 user 0m6.098s 00:16:28.433 sys 0m0.654s 00:16:28.433 ************************************ 00:16:28.433 END TEST raid_write_error_test 00:16:28.433 ************************************ 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:28.433 14:48:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.433 
14:48:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:28.433 14:48:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:28.433 14:48:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:28.433 14:48:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:28.433 14:48:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.433 ************************************ 00:16:28.433 START TEST raid_state_function_test 00:16:28.433 ************************************ 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:28.433 Process raid pid: 67558 00:16:28.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67558 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67558' 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67558 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67558 ']' 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:28.433 14:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.433 [2024-11-04 14:48:58.195296] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:16:28.433 [2024-11-04 14:48:58.195686] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.691 [2024-11-04 14:48:58.384234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.691 [2024-11-04 14:48:58.534491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.950 [2024-11-04 14:48:58.768710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.950 [2024-11-04 14:48:58.769084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.517 [2024-11-04 14:48:59.163093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.517 [2024-11-04 14:48:59.163178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.517 [2024-11-04 14:48:59.163197] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.517 [2024-11-04 14:48:59.163233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.517 [2024-11-04 14:48:59.163251] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:16:29.517 [2024-11-04 14:48:59.163267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.517 14:48:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.517 "name": "Existed_Raid", 00:16:29.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.517 "strip_size_kb": 0, 00:16:29.517 "state": "configuring", 00:16:29.517 "raid_level": "raid1", 00:16:29.517 "superblock": false, 00:16:29.517 "num_base_bdevs": 3, 00:16:29.517 "num_base_bdevs_discovered": 0, 00:16:29.517 "num_base_bdevs_operational": 3, 00:16:29.517 "base_bdevs_list": [ 00:16:29.517 { 00:16:29.517 "name": "BaseBdev1", 00:16:29.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.517 "is_configured": false, 00:16:29.517 "data_offset": 0, 00:16:29.517 "data_size": 0 00:16:29.517 }, 00:16:29.517 { 00:16:29.517 "name": "BaseBdev2", 00:16:29.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.517 "is_configured": false, 00:16:29.517 "data_offset": 0, 00:16:29.517 "data_size": 0 00:16:29.517 }, 00:16:29.517 { 00:16:29.517 "name": "BaseBdev3", 00:16:29.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.517 "is_configured": false, 00:16:29.517 "data_offset": 0, 00:16:29.517 "data_size": 0 00:16:29.517 } 00:16:29.517 ] 00:16:29.517 }' 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.517 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.084 [2024-11-04 14:48:59.751173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.084 [2024-11-04 14:48:59.751386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.084 [2024-11-04 14:48:59.759121] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.084 [2024-11-04 14:48:59.759180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.084 [2024-11-04 14:48:59.759198] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.084 [2024-11-04 14:48:59.759215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.084 [2024-11-04 14:48:59.759243] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.084 [2024-11-04 14:48:59.759263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.084 [2024-11-04 14:48:59.808828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.084 BaseBdev1 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.084 [ 00:16:30.084 { 00:16:30.084 "name": "BaseBdev1", 00:16:30.084 "aliases": [ 00:16:30.084 "5259c509-6944-44a9-9972-a99e3d249443" 00:16:30.084 ], 00:16:30.084 "product_name": "Malloc disk", 00:16:30.084 "block_size": 512, 00:16:30.084 "num_blocks": 65536, 00:16:30.084 "uuid": "5259c509-6944-44a9-9972-a99e3d249443", 00:16:30.084 "assigned_rate_limits": { 00:16:30.084 "rw_ios_per_sec": 0, 00:16:30.084 "rw_mbytes_per_sec": 0, 00:16:30.084 "r_mbytes_per_sec": 0, 00:16:30.084 "w_mbytes_per_sec": 0 00:16:30.084 }, 
00:16:30.084 "claimed": true, 00:16:30.084 "claim_type": "exclusive_write", 00:16:30.084 "zoned": false, 00:16:30.084 "supported_io_types": { 00:16:30.084 "read": true, 00:16:30.084 "write": true, 00:16:30.084 "unmap": true, 00:16:30.084 "flush": true, 00:16:30.084 "reset": true, 00:16:30.084 "nvme_admin": false, 00:16:30.084 "nvme_io": false, 00:16:30.084 "nvme_io_md": false, 00:16:30.084 "write_zeroes": true, 00:16:30.084 "zcopy": true, 00:16:30.084 "get_zone_info": false, 00:16:30.084 "zone_management": false, 00:16:30.084 "zone_append": false, 00:16:30.084 "compare": false, 00:16:30.084 "compare_and_write": false, 00:16:30.084 "abort": true, 00:16:30.084 "seek_hole": false, 00:16:30.084 "seek_data": false, 00:16:30.084 "copy": true, 00:16:30.084 "nvme_iov_md": false 00:16:30.084 }, 00:16:30.084 "memory_domains": [ 00:16:30.084 { 00:16:30.084 "dma_device_id": "system", 00:16:30.084 "dma_device_type": 1 00:16:30.084 }, 00:16:30.084 { 00:16:30.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.084 "dma_device_type": 2 00:16:30.084 } 00:16:30.084 ], 00:16:30.084 "driver_specific": {} 00:16:30.084 } 00:16:30.084 ] 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.084 14:48:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.084 "name": "Existed_Raid", 00:16:30.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.084 "strip_size_kb": 0, 00:16:30.084 "state": "configuring", 00:16:30.084 "raid_level": "raid1", 00:16:30.084 "superblock": false, 00:16:30.084 "num_base_bdevs": 3, 00:16:30.084 "num_base_bdevs_discovered": 1, 00:16:30.084 "num_base_bdevs_operational": 3, 00:16:30.084 "base_bdevs_list": [ 00:16:30.084 { 00:16:30.084 "name": "BaseBdev1", 00:16:30.084 "uuid": "5259c509-6944-44a9-9972-a99e3d249443", 00:16:30.084 "is_configured": true, 00:16:30.084 "data_offset": 0, 00:16:30.084 "data_size": 65536 00:16:30.084 }, 00:16:30.084 { 00:16:30.084 "name": "BaseBdev2", 00:16:30.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.084 "is_configured": false, 00:16:30.084 
"data_offset": 0, 00:16:30.084 "data_size": 0 00:16:30.084 }, 00:16:30.084 { 00:16:30.084 "name": "BaseBdev3", 00:16:30.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.084 "is_configured": false, 00:16:30.084 "data_offset": 0, 00:16:30.084 "data_size": 0 00:16:30.084 } 00:16:30.084 ] 00:16:30.084 }' 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.084 14:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.649 [2024-11-04 14:49:00.401041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.649 [2024-11-04 14:49:00.401123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.649 [2024-11-04 14:49:00.409084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.649 [2024-11-04 14:49:00.411616] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.649 [2024-11-04 14:49:00.411679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:16:30.649 [2024-11-04 14:49:00.411701] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.649 [2024-11-04 14:49:00.411722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.649 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.650 
14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.650 "name": "Existed_Raid", 00:16:30.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.650 "strip_size_kb": 0, 00:16:30.650 "state": "configuring", 00:16:30.650 "raid_level": "raid1", 00:16:30.650 "superblock": false, 00:16:30.650 "num_base_bdevs": 3, 00:16:30.650 "num_base_bdevs_discovered": 1, 00:16:30.650 "num_base_bdevs_operational": 3, 00:16:30.650 "base_bdevs_list": [ 00:16:30.650 { 00:16:30.650 "name": "BaseBdev1", 00:16:30.650 "uuid": "5259c509-6944-44a9-9972-a99e3d249443", 00:16:30.650 "is_configured": true, 00:16:30.650 "data_offset": 0, 00:16:30.650 "data_size": 65536 00:16:30.650 }, 00:16:30.650 { 00:16:30.650 "name": "BaseBdev2", 00:16:30.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.650 "is_configured": false, 00:16:30.650 "data_offset": 0, 00:16:30.650 "data_size": 0 00:16:30.650 }, 00:16:30.650 { 00:16:30.650 "name": "BaseBdev3", 00:16:30.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.650 "is_configured": false, 00:16:30.650 "data_offset": 0, 00:16:30.650 "data_size": 0 00:16:30.650 } 00:16:30.650 ] 00:16:30.650 }' 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.650 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.216 14:49:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.216 [2024-11-04 14:49:00.980547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.216 BaseBdev2 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.216 14:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.216 [ 00:16:31.216 { 00:16:31.216 "name": "BaseBdev2", 00:16:31.216 "aliases": [ 00:16:31.216 "4a97b729-2240-412b-a178-f0c468581566" 00:16:31.216 ], 00:16:31.216 "product_name": "Malloc disk", 
00:16:31.216 "block_size": 512, 00:16:31.216 "num_blocks": 65536, 00:16:31.216 "uuid": "4a97b729-2240-412b-a178-f0c468581566", 00:16:31.216 "assigned_rate_limits": { 00:16:31.216 "rw_ios_per_sec": 0, 00:16:31.216 "rw_mbytes_per_sec": 0, 00:16:31.216 "r_mbytes_per_sec": 0, 00:16:31.216 "w_mbytes_per_sec": 0 00:16:31.216 }, 00:16:31.216 "claimed": true, 00:16:31.216 "claim_type": "exclusive_write", 00:16:31.216 "zoned": false, 00:16:31.216 "supported_io_types": { 00:16:31.216 "read": true, 00:16:31.216 "write": true, 00:16:31.216 "unmap": true, 00:16:31.216 "flush": true, 00:16:31.216 "reset": true, 00:16:31.216 "nvme_admin": false, 00:16:31.216 "nvme_io": false, 00:16:31.216 "nvme_io_md": false, 00:16:31.216 "write_zeroes": true, 00:16:31.216 "zcopy": true, 00:16:31.216 "get_zone_info": false, 00:16:31.216 "zone_management": false, 00:16:31.216 "zone_append": false, 00:16:31.216 "compare": false, 00:16:31.216 "compare_and_write": false, 00:16:31.216 "abort": true, 00:16:31.216 "seek_hole": false, 00:16:31.216 "seek_data": false, 00:16:31.216 "copy": true, 00:16:31.216 "nvme_iov_md": false 00:16:31.216 }, 00:16:31.216 "memory_domains": [ 00:16:31.216 { 00:16:31.216 "dma_device_id": "system", 00:16:31.216 "dma_device_type": 1 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.216 "dma_device_type": 2 00:16:31.216 } 00:16:31.216 ], 00:16:31.216 "driver_specific": {} 00:16:31.216 } 00:16:31.216 ] 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.216 "name": "Existed_Raid", 00:16:31.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.216 "strip_size_kb": 0, 00:16:31.216 "state": "configuring", 00:16:31.216 "raid_level": "raid1", 00:16:31.216 "superblock": false, 00:16:31.216 "num_base_bdevs": 3, 
00:16:31.216 "num_base_bdevs_discovered": 2, 00:16:31.216 "num_base_bdevs_operational": 3, 00:16:31.216 "base_bdevs_list": [ 00:16:31.216 { 00:16:31.216 "name": "BaseBdev1", 00:16:31.216 "uuid": "5259c509-6944-44a9-9972-a99e3d249443", 00:16:31.216 "is_configured": true, 00:16:31.216 "data_offset": 0, 00:16:31.216 "data_size": 65536 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "name": "BaseBdev2", 00:16:31.216 "uuid": "4a97b729-2240-412b-a178-f0c468581566", 00:16:31.216 "is_configured": true, 00:16:31.216 "data_offset": 0, 00:16:31.216 "data_size": 65536 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "name": "BaseBdev3", 00:16:31.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.216 "is_configured": false, 00:16:31.216 "data_offset": 0, 00:16:31.216 "data_size": 0 00:16:31.216 } 00:16:31.216 ] 00:16:31.216 }' 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.216 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.783 [2024-11-04 14:49:01.597125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.783 [2024-11-04 14:49:01.597474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:31.783 [2024-11-04 14:49:01.597514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:31.783 [2024-11-04 14:49:01.597903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:31.783 [2024-11-04 14:49:01.598167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:16:31.783 [2024-11-04 14:49:01.598186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:31.783 [2024-11-04 14:49:01.598559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.783 BaseBdev3 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.783 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.783 [ 00:16:31.783 { 00:16:31.783 "name": "BaseBdev3", 00:16:31.783 "aliases": [ 00:16:31.783 
"8ef43171-4698-4163-b127-50a3fca046de" 00:16:31.783 ], 00:16:31.783 "product_name": "Malloc disk", 00:16:31.783 "block_size": 512, 00:16:31.783 "num_blocks": 65536, 00:16:31.783 "uuid": "8ef43171-4698-4163-b127-50a3fca046de", 00:16:31.783 "assigned_rate_limits": { 00:16:31.783 "rw_ios_per_sec": 0, 00:16:31.783 "rw_mbytes_per_sec": 0, 00:16:31.783 "r_mbytes_per_sec": 0, 00:16:31.783 "w_mbytes_per_sec": 0 00:16:31.784 }, 00:16:31.784 "claimed": true, 00:16:31.784 "claim_type": "exclusive_write", 00:16:31.784 "zoned": false, 00:16:31.784 "supported_io_types": { 00:16:31.784 "read": true, 00:16:31.784 "write": true, 00:16:31.784 "unmap": true, 00:16:31.784 "flush": true, 00:16:31.784 "reset": true, 00:16:31.784 "nvme_admin": false, 00:16:31.784 "nvme_io": false, 00:16:31.784 "nvme_io_md": false, 00:16:31.784 "write_zeroes": true, 00:16:31.784 "zcopy": true, 00:16:31.784 "get_zone_info": false, 00:16:31.784 "zone_management": false, 00:16:31.784 "zone_append": false, 00:16:31.784 "compare": false, 00:16:31.784 "compare_and_write": false, 00:16:31.784 "abort": true, 00:16:31.784 "seek_hole": false, 00:16:31.784 "seek_data": false, 00:16:31.784 "copy": true, 00:16:31.784 "nvme_iov_md": false 00:16:31.784 }, 00:16:31.784 "memory_domains": [ 00:16:31.784 { 00:16:31.784 "dma_device_id": "system", 00:16:31.784 "dma_device_type": 1 00:16:31.784 }, 00:16:31.784 { 00:16:31.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.784 "dma_device_type": 2 00:16:31.784 } 00:16:31.784 ], 00:16:31.784 "driver_specific": {} 00:16:31.784 } 00:16:31.784 ] 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.784 
14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.784 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.042 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.042 "name": "Existed_Raid", 00:16:32.042 "uuid": "d6e6852d-e24e-495b-960b-ea94c23971fe", 00:16:32.042 "strip_size_kb": 0, 00:16:32.042 "state": "online", 00:16:32.042 "raid_level": 
"raid1", 00:16:32.042 "superblock": false, 00:16:32.042 "num_base_bdevs": 3, 00:16:32.042 "num_base_bdevs_discovered": 3, 00:16:32.042 "num_base_bdevs_operational": 3, 00:16:32.042 "base_bdevs_list": [ 00:16:32.042 { 00:16:32.042 "name": "BaseBdev1", 00:16:32.042 "uuid": "5259c509-6944-44a9-9972-a99e3d249443", 00:16:32.042 "is_configured": true, 00:16:32.042 "data_offset": 0, 00:16:32.042 "data_size": 65536 00:16:32.042 }, 00:16:32.042 { 00:16:32.042 "name": "BaseBdev2", 00:16:32.042 "uuid": "4a97b729-2240-412b-a178-f0c468581566", 00:16:32.042 "is_configured": true, 00:16:32.042 "data_offset": 0, 00:16:32.042 "data_size": 65536 00:16:32.042 }, 00:16:32.042 { 00:16:32.042 "name": "BaseBdev3", 00:16:32.042 "uuid": "8ef43171-4698-4163-b127-50a3fca046de", 00:16:32.042 "is_configured": true, 00:16:32.042 "data_offset": 0, 00:16:32.042 "data_size": 65536 00:16:32.042 } 00:16:32.042 ] 00:16:32.042 }' 00:16:32.042 14:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.042 14:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.300 [2024-11-04 14:49:02.153763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.300 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.559 "name": "Existed_Raid", 00:16:32.559 "aliases": [ 00:16:32.559 "d6e6852d-e24e-495b-960b-ea94c23971fe" 00:16:32.559 ], 00:16:32.559 "product_name": "Raid Volume", 00:16:32.559 "block_size": 512, 00:16:32.559 "num_blocks": 65536, 00:16:32.559 "uuid": "d6e6852d-e24e-495b-960b-ea94c23971fe", 00:16:32.559 "assigned_rate_limits": { 00:16:32.559 "rw_ios_per_sec": 0, 00:16:32.559 "rw_mbytes_per_sec": 0, 00:16:32.559 "r_mbytes_per_sec": 0, 00:16:32.559 "w_mbytes_per_sec": 0 00:16:32.559 }, 00:16:32.559 "claimed": false, 00:16:32.559 "zoned": false, 00:16:32.559 "supported_io_types": { 00:16:32.559 "read": true, 00:16:32.559 "write": true, 00:16:32.559 "unmap": false, 00:16:32.559 "flush": false, 00:16:32.559 "reset": true, 00:16:32.559 "nvme_admin": false, 00:16:32.559 "nvme_io": false, 00:16:32.559 "nvme_io_md": false, 00:16:32.559 "write_zeroes": true, 00:16:32.559 "zcopy": false, 00:16:32.559 "get_zone_info": false, 00:16:32.559 "zone_management": false, 00:16:32.559 "zone_append": false, 00:16:32.559 "compare": false, 00:16:32.559 "compare_and_write": false, 00:16:32.559 "abort": false, 00:16:32.559 "seek_hole": false, 00:16:32.559 "seek_data": false, 00:16:32.559 "copy": false, 00:16:32.559 "nvme_iov_md": false 00:16:32.559 }, 00:16:32.559 "memory_domains": [ 00:16:32.559 { 00:16:32.559 "dma_device_id": "system", 00:16:32.559 "dma_device_type": 1 00:16:32.559 }, 00:16:32.559 { 
00:16:32.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.559 "dma_device_type": 2 00:16:32.559 }, 00:16:32.559 { 00:16:32.559 "dma_device_id": "system", 00:16:32.559 "dma_device_type": 1 00:16:32.559 }, 00:16:32.559 { 00:16:32.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.559 "dma_device_type": 2 00:16:32.559 }, 00:16:32.559 { 00:16:32.559 "dma_device_id": "system", 00:16:32.559 "dma_device_type": 1 00:16:32.559 }, 00:16:32.559 { 00:16:32.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.559 "dma_device_type": 2 00:16:32.559 } 00:16:32.559 ], 00:16:32.559 "driver_specific": { 00:16:32.559 "raid": { 00:16:32.559 "uuid": "d6e6852d-e24e-495b-960b-ea94c23971fe", 00:16:32.559 "strip_size_kb": 0, 00:16:32.559 "state": "online", 00:16:32.559 "raid_level": "raid1", 00:16:32.559 "superblock": false, 00:16:32.559 "num_base_bdevs": 3, 00:16:32.559 "num_base_bdevs_discovered": 3, 00:16:32.559 "num_base_bdevs_operational": 3, 00:16:32.559 "base_bdevs_list": [ 00:16:32.559 { 00:16:32.559 "name": "BaseBdev1", 00:16:32.559 "uuid": "5259c509-6944-44a9-9972-a99e3d249443", 00:16:32.559 "is_configured": true, 00:16:32.559 "data_offset": 0, 00:16:32.559 "data_size": 65536 00:16:32.559 }, 00:16:32.559 { 00:16:32.559 "name": "BaseBdev2", 00:16:32.559 "uuid": "4a97b729-2240-412b-a178-f0c468581566", 00:16:32.559 "is_configured": true, 00:16:32.559 "data_offset": 0, 00:16:32.559 "data_size": 65536 00:16:32.559 }, 00:16:32.559 { 00:16:32.559 "name": "BaseBdev3", 00:16:32.559 "uuid": "8ef43171-4698-4163-b127-50a3fca046de", 00:16:32.559 "is_configured": true, 00:16:32.559 "data_offset": 0, 00:16:32.559 "data_size": 65536 00:16:32.559 } 00:16:32.559 ] 00:16:32.559 } 00:16:32.559 } 00:16:32.559 }' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:16:32.559 BaseBdev2 00:16:32.559 BaseBdev3' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.559 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.560 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.560 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:32.560 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.560 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.560 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.560 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.818 [2024-11-04 14:49:02.465531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.818 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.819 14:49:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.819 "name": "Existed_Raid", 00:16:32.819 "uuid": "d6e6852d-e24e-495b-960b-ea94c23971fe", 00:16:32.819 "strip_size_kb": 0, 00:16:32.819 "state": "online", 00:16:32.819 "raid_level": "raid1", 00:16:32.819 "superblock": false, 00:16:32.819 "num_base_bdevs": 3, 00:16:32.819 "num_base_bdevs_discovered": 2, 00:16:32.819 "num_base_bdevs_operational": 2, 00:16:32.819 "base_bdevs_list": [ 00:16:32.819 { 00:16:32.819 "name": null, 00:16:32.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.819 "is_configured": false, 00:16:32.819 "data_offset": 0, 00:16:32.819 "data_size": 65536 00:16:32.819 }, 00:16:32.819 { 00:16:32.819 "name": "BaseBdev2", 00:16:32.819 "uuid": "4a97b729-2240-412b-a178-f0c468581566", 00:16:32.819 "is_configured": true, 00:16:32.819 "data_offset": 0, 00:16:32.819 "data_size": 65536 00:16:32.819 }, 00:16:32.819 { 00:16:32.819 "name": "BaseBdev3", 00:16:32.819 "uuid": "8ef43171-4698-4163-b127-50a3fca046de", 00:16:32.819 "is_configured": true, 00:16:32.819 "data_offset": 0, 00:16:32.819 "data_size": 65536 00:16:32.819 } 00:16:32.819 ] 00:16:32.819 }' 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.819 14:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.394 [2024-11-04 14:49:03.125715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # 
rpc_cmd bdev_malloc_delete BaseBdev3 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.394 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.394 [2024-11-04 14:49:03.270703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.394 [2024-11-04 14:49:03.271007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.653 [2024-11-04 14:49:03.361373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.653 [2024-11-04 14:49:03.361644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.653 [2024-11-04 14:49:03.361820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 
00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 BaseBdev2 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 [ 00:16:33.653 { 00:16:33.653 "name": "BaseBdev2", 00:16:33.653 "aliases": [ 00:16:33.653 "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023" 00:16:33.653 ], 00:16:33.653 "product_name": "Malloc disk", 00:16:33.653 "block_size": 512, 00:16:33.653 "num_blocks": 65536, 00:16:33.653 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:33.653 "assigned_rate_limits": { 00:16:33.653 "rw_ios_per_sec": 0, 00:16:33.653 "rw_mbytes_per_sec": 0, 00:16:33.653 "r_mbytes_per_sec": 0, 00:16:33.653 "w_mbytes_per_sec": 0 00:16:33.653 }, 00:16:33.653 "claimed": false, 00:16:33.653 "zoned": false, 00:16:33.653 "supported_io_types": { 00:16:33.653 "read": true, 00:16:33.653 "write": true, 00:16:33.653 "unmap": true, 00:16:33.653 "flush": true, 00:16:33.653 "reset": true, 00:16:33.653 "nvme_admin": false, 00:16:33.653 "nvme_io": false, 00:16:33.653 "nvme_io_md": false, 00:16:33.653 "write_zeroes": true, 00:16:33.653 "zcopy": true, 00:16:33.653 "get_zone_info": false, 00:16:33.653 "zone_management": false, 00:16:33.653 "zone_append": false, 00:16:33.653 "compare": false, 00:16:33.653 "compare_and_write": false, 00:16:33.653 "abort": true, 00:16:33.653 "seek_hole": false, 00:16:33.653 "seek_data": false, 00:16:33.653 "copy": true, 00:16:33.653 "nvme_iov_md": false 00:16:33.653 }, 00:16:33.653 "memory_domains": [ 00:16:33.653 { 00:16:33.653 "dma_device_id": "system", 00:16:33.653 "dma_device_type": 1 00:16:33.653 }, 00:16:33.653 { 00:16:33.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.653 "dma_device_type": 2 00:16:33.653 } 00:16:33.653 ], 00:16:33.653 "driver_specific": {} 00:16:33.653 } 00:16:33.653 ] 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 
14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.653 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.654 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.913 BaseBdev3 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.913 [ 00:16:33.913 { 00:16:33.913 "name": "BaseBdev3", 00:16:33.913 "aliases": [ 00:16:33.913 "75bcd87b-0c31-4445-a3e4-f09b0a3ad739" 00:16:33.913 ], 00:16:33.913 "product_name": "Malloc disk", 00:16:33.913 "block_size": 512, 00:16:33.913 "num_blocks": 65536, 00:16:33.913 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:33.913 "assigned_rate_limits": { 00:16:33.913 "rw_ios_per_sec": 0, 00:16:33.913 "rw_mbytes_per_sec": 0, 00:16:33.913 "r_mbytes_per_sec": 0, 00:16:33.913 "w_mbytes_per_sec": 0 00:16:33.913 }, 00:16:33.913 "claimed": false, 00:16:33.913 "zoned": false, 00:16:33.913 "supported_io_types": { 00:16:33.913 "read": true, 00:16:33.913 "write": true, 00:16:33.913 "unmap": true, 00:16:33.913 "flush": true, 00:16:33.913 "reset": true, 00:16:33.913 "nvme_admin": false, 00:16:33.913 "nvme_io": false, 00:16:33.913 "nvme_io_md": false, 00:16:33.913 "write_zeroes": true, 00:16:33.913 "zcopy": true, 00:16:33.913 "get_zone_info": false, 00:16:33.913 "zone_management": false, 00:16:33.913 "zone_append": false, 00:16:33.913 "compare": false, 00:16:33.913 "compare_and_write": false, 00:16:33.913 "abort": true, 00:16:33.913 "seek_hole": false, 00:16:33.913 "seek_data": false, 00:16:33.913 "copy": true, 00:16:33.913 "nvme_iov_md": false 00:16:33.913 }, 00:16:33.913 "memory_domains": [ 00:16:33.913 { 00:16:33.913 "dma_device_id": "system", 00:16:33.913 "dma_device_type": 1 00:16:33.913 }, 00:16:33.913 { 00:16:33.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.913 "dma_device_type": 2 00:16:33.913 } 00:16:33.913 ], 00:16:33.913 "driver_specific": {} 00:16:33.913 } 00:16:33.913 ] 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.913 14:49:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.913 [2024-11-04 14:49:03.582730] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.913 [2024-11-04 14:49:03.582928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.913 [2024-11-04 14:49:03.582973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.913 [2024-11-04 14:49:03.585453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.913 "name": "Existed_Raid", 00:16:33.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.913 "strip_size_kb": 0, 00:16:33.913 "state": "configuring", 00:16:33.913 "raid_level": "raid1", 00:16:33.913 "superblock": false, 00:16:33.913 "num_base_bdevs": 3, 00:16:33.913 "num_base_bdevs_discovered": 2, 00:16:33.913 "num_base_bdevs_operational": 3, 00:16:33.913 "base_bdevs_list": [ 00:16:33.913 { 00:16:33.913 "name": "BaseBdev1", 00:16:33.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.913 "is_configured": false, 00:16:33.913 "data_offset": 0, 00:16:33.913 "data_size": 0 00:16:33.913 }, 00:16:33.913 { 00:16:33.913 "name": "BaseBdev2", 00:16:33.913 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:33.913 "is_configured": true, 00:16:33.913 "data_offset": 0, 00:16:33.913 "data_size": 65536 00:16:33.913 }, 00:16:33.913 { 
00:16:33.913 "name": "BaseBdev3", 00:16:33.913 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:33.913 "is_configured": true, 00:16:33.913 "data_offset": 0, 00:16:33.913 "data_size": 65536 00:16:33.913 } 00:16:33.913 ] 00:16:33.913 }' 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.913 14:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.481 [2024-11-04 14:49:04.135061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.481 "name": "Existed_Raid", 00:16:34.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.481 "strip_size_kb": 0, 00:16:34.481 "state": "configuring", 00:16:34.481 "raid_level": "raid1", 00:16:34.481 "superblock": false, 00:16:34.481 "num_base_bdevs": 3, 00:16:34.481 "num_base_bdevs_discovered": 1, 00:16:34.481 "num_base_bdevs_operational": 3, 00:16:34.481 "base_bdevs_list": [ 00:16:34.481 { 00:16:34.481 "name": "BaseBdev1", 00:16:34.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.481 "is_configured": false, 00:16:34.481 "data_offset": 0, 00:16:34.481 "data_size": 0 00:16:34.481 }, 00:16:34.481 { 00:16:34.481 "name": null, 00:16:34.481 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:34.481 "is_configured": false, 00:16:34.481 "data_offset": 0, 00:16:34.481 "data_size": 65536 00:16:34.481 }, 00:16:34.481 { 00:16:34.481 "name": "BaseBdev3", 00:16:34.481 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:34.481 "is_configured": true, 00:16:34.481 "data_offset": 0, 00:16:34.481 "data_size": 65536 00:16:34.481 } 00:16:34.481 ] 00:16:34.481 }' 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.481 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.048 [2024-11-04 14:49:04.742026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.048 BaseBdev1 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:35.048 
14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.048 [ 00:16:35.048 { 00:16:35.048 "name": "BaseBdev1", 00:16:35.048 "aliases": [ 00:16:35.048 "0681abe1-5470-4edb-ac07-46c83f008199" 00:16:35.048 ], 00:16:35.048 "product_name": "Malloc disk", 00:16:35.048 "block_size": 512, 00:16:35.048 "num_blocks": 65536, 00:16:35.048 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 00:16:35.048 "assigned_rate_limits": { 00:16:35.048 "rw_ios_per_sec": 0, 00:16:35.048 "rw_mbytes_per_sec": 0, 00:16:35.048 "r_mbytes_per_sec": 0, 00:16:35.048 "w_mbytes_per_sec": 0 00:16:35.048 }, 00:16:35.048 "claimed": true, 00:16:35.048 "claim_type": "exclusive_write", 00:16:35.048 "zoned": false, 00:16:35.048 "supported_io_types": { 00:16:35.048 "read": true, 00:16:35.048 "write": true, 00:16:35.048 "unmap": true, 00:16:35.048 "flush": true, 00:16:35.048 "reset": true, 00:16:35.048 "nvme_admin": false, 00:16:35.048 "nvme_io": false, 00:16:35.048 "nvme_io_md": false, 00:16:35.048 "write_zeroes": true, 00:16:35.048 "zcopy": true, 00:16:35.048 "get_zone_info": false, 00:16:35.048 "zone_management": false, 00:16:35.048 "zone_append": false, 00:16:35.048 "compare": 
false, 00:16:35.048 "compare_and_write": false, 00:16:35.048 "abort": true, 00:16:35.048 "seek_hole": false, 00:16:35.048 "seek_data": false, 00:16:35.048 "copy": true, 00:16:35.048 "nvme_iov_md": false 00:16:35.048 }, 00:16:35.048 "memory_domains": [ 00:16:35.048 { 00:16:35.048 "dma_device_id": "system", 00:16:35.048 "dma_device_type": 1 00:16:35.048 }, 00:16:35.048 { 00:16:35.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.048 "dma_device_type": 2 00:16:35.048 } 00:16:35.048 ], 00:16:35.048 "driver_specific": {} 00:16:35.048 } 00:16:35.048 ] 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.048 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.049 "name": "Existed_Raid", 00:16:35.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.049 "strip_size_kb": 0, 00:16:35.049 "state": "configuring", 00:16:35.049 "raid_level": "raid1", 00:16:35.049 "superblock": false, 00:16:35.049 "num_base_bdevs": 3, 00:16:35.049 "num_base_bdevs_discovered": 2, 00:16:35.049 "num_base_bdevs_operational": 3, 00:16:35.049 "base_bdevs_list": [ 00:16:35.049 { 00:16:35.049 "name": "BaseBdev1", 00:16:35.049 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 00:16:35.049 "is_configured": true, 00:16:35.049 "data_offset": 0, 00:16:35.049 "data_size": 65536 00:16:35.049 }, 00:16:35.049 { 00:16:35.049 "name": null, 00:16:35.049 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:35.049 "is_configured": false, 00:16:35.049 "data_offset": 0, 00:16:35.049 "data_size": 65536 00:16:35.049 }, 00:16:35.049 { 00:16:35.049 "name": "BaseBdev3", 00:16:35.049 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:35.049 "is_configured": true, 00:16:35.049 "data_offset": 0, 00:16:35.049 "data_size": 65536 00:16:35.049 } 00:16:35.049 ] 00:16:35.049 }' 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.049 14:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.616 [2024-11-04 14:49:05.322348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.616 
14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.616 "name": "Existed_Raid", 00:16:35.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.616 "strip_size_kb": 0, 00:16:35.616 "state": "configuring", 00:16:35.616 "raid_level": "raid1", 00:16:35.616 "superblock": false, 00:16:35.616 "num_base_bdevs": 3, 00:16:35.616 "num_base_bdevs_discovered": 1, 00:16:35.616 "num_base_bdevs_operational": 3, 00:16:35.616 "base_bdevs_list": [ 00:16:35.616 { 00:16:35.616 "name": "BaseBdev1", 00:16:35.616 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 00:16:35.616 "is_configured": true, 00:16:35.616 "data_offset": 0, 00:16:35.616 "data_size": 65536 00:16:35.616 }, 00:16:35.616 { 00:16:35.616 "name": null, 00:16:35.616 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:35.616 "is_configured": false, 00:16:35.616 "data_offset": 0, 00:16:35.616 "data_size": 65536 00:16:35.616 }, 00:16:35.616 { 00:16:35.616 "name": null, 00:16:35.616 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:35.616 "is_configured": false, 00:16:35.616 "data_offset": 0, 
00:16:35.616 "data_size": 65536 00:16:35.616 } 00:16:35.616 ] 00:16:35.616 }' 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.616 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.182 [2024-11-04 14:49:05.922595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.182 "name": "Existed_Raid", 00:16:36.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.182 "strip_size_kb": 0, 00:16:36.182 "state": "configuring", 00:16:36.182 "raid_level": "raid1", 00:16:36.182 "superblock": false, 00:16:36.182 "num_base_bdevs": 3, 00:16:36.182 "num_base_bdevs_discovered": 2, 00:16:36.182 "num_base_bdevs_operational": 3, 00:16:36.182 "base_bdevs_list": [ 00:16:36.182 { 00:16:36.182 "name": "BaseBdev1", 00:16:36.182 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 00:16:36.182 "is_configured": true, 00:16:36.182 "data_offset": 0, 00:16:36.182 "data_size": 65536 
00:16:36.182 }, 00:16:36.182 { 00:16:36.182 "name": null, 00:16:36.182 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:36.182 "is_configured": false, 00:16:36.182 "data_offset": 0, 00:16:36.182 "data_size": 65536 00:16:36.182 }, 00:16:36.182 { 00:16:36.182 "name": "BaseBdev3", 00:16:36.182 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:36.182 "is_configured": true, 00:16:36.182 "data_offset": 0, 00:16:36.182 "data_size": 65536 00:16:36.182 } 00:16:36.182 ] 00:16:36.182 }' 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.182 14:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.749 [2024-11-04 14:49:06.530735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.749 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.007 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.007 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.007 "name": "Existed_Raid", 00:16:37.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.007 "strip_size_kb": 0, 00:16:37.007 "state": "configuring", 00:16:37.007 "raid_level": "raid1", 00:16:37.007 
"superblock": false, 00:16:37.007 "num_base_bdevs": 3, 00:16:37.007 "num_base_bdevs_discovered": 1, 00:16:37.007 "num_base_bdevs_operational": 3, 00:16:37.007 "base_bdevs_list": [ 00:16:37.007 { 00:16:37.007 "name": null, 00:16:37.007 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 00:16:37.007 "is_configured": false, 00:16:37.007 "data_offset": 0, 00:16:37.007 "data_size": 65536 00:16:37.007 }, 00:16:37.007 { 00:16:37.007 "name": null, 00:16:37.007 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:37.007 "is_configured": false, 00:16:37.007 "data_offset": 0, 00:16:37.007 "data_size": 65536 00:16:37.007 }, 00:16:37.007 { 00:16:37.007 "name": "BaseBdev3", 00:16:37.007 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:37.007 "is_configured": true, 00:16:37.007 "data_offset": 0, 00:16:37.007 "data_size": 65536 00:16:37.007 } 00:16:37.007 ] 00:16:37.007 }' 00:16:37.007 14:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.007 14:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.265 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.265 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.265 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.265 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.265 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.524 [2024-11-04 14:49:07.189352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.524 14:49:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.524 "name": "Existed_Raid", 00:16:37.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.524 "strip_size_kb": 0, 00:16:37.524 "state": "configuring", 00:16:37.524 "raid_level": "raid1", 00:16:37.524 "superblock": false, 00:16:37.524 "num_base_bdevs": 3, 00:16:37.524 "num_base_bdevs_discovered": 2, 00:16:37.524 "num_base_bdevs_operational": 3, 00:16:37.524 "base_bdevs_list": [ 00:16:37.524 { 00:16:37.524 "name": null, 00:16:37.524 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 00:16:37.524 "is_configured": false, 00:16:37.524 "data_offset": 0, 00:16:37.524 "data_size": 65536 00:16:37.524 }, 00:16:37.524 { 00:16:37.524 "name": "BaseBdev2", 00:16:37.524 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:37.524 "is_configured": true, 00:16:37.524 "data_offset": 0, 00:16:37.524 "data_size": 65536 00:16:37.524 }, 00:16:37.524 { 00:16:37.524 "name": "BaseBdev3", 00:16:37.524 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:37.524 "is_configured": true, 00:16:37.524 "data_offset": 0, 00:16:37.524 "data_size": 65536 00:16:37.524 } 00:16:37.524 ] 00:16:37.524 }' 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.524 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.092 14:49:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0681abe1-5470-4edb-ac07-46c83f008199 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.092 [2024-11-04 14:49:07.859867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:38.092 [2024-11-04 14:49:07.859963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:38.092 [2024-11-04 14:49:07.859977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:38.092 [2024-11-04 14:49:07.860366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:38.092 [2024-11-04 14:49:07.860598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:38.092 [2024-11-04 14:49:07.860622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:38.092 [2024-11-04 14:49:07.860973] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.092 NewBaseBdev 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:38.092 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.093 [ 00:16:38.093 { 00:16:38.093 "name": "NewBaseBdev", 00:16:38.093 "aliases": [ 00:16:38.093 "0681abe1-5470-4edb-ac07-46c83f008199" 00:16:38.093 ], 00:16:38.093 "product_name": "Malloc disk", 00:16:38.093 "block_size": 512, 00:16:38.093 "num_blocks": 65536, 00:16:38.093 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 
00:16:38.093 "assigned_rate_limits": { 00:16:38.093 "rw_ios_per_sec": 0, 00:16:38.093 "rw_mbytes_per_sec": 0, 00:16:38.093 "r_mbytes_per_sec": 0, 00:16:38.093 "w_mbytes_per_sec": 0 00:16:38.093 }, 00:16:38.093 "claimed": true, 00:16:38.093 "claim_type": "exclusive_write", 00:16:38.093 "zoned": false, 00:16:38.093 "supported_io_types": { 00:16:38.093 "read": true, 00:16:38.093 "write": true, 00:16:38.093 "unmap": true, 00:16:38.093 "flush": true, 00:16:38.093 "reset": true, 00:16:38.093 "nvme_admin": false, 00:16:38.093 "nvme_io": false, 00:16:38.093 "nvme_io_md": false, 00:16:38.093 "write_zeroes": true, 00:16:38.093 "zcopy": true, 00:16:38.093 "get_zone_info": false, 00:16:38.093 "zone_management": false, 00:16:38.093 "zone_append": false, 00:16:38.093 "compare": false, 00:16:38.093 "compare_and_write": false, 00:16:38.093 "abort": true, 00:16:38.093 "seek_hole": false, 00:16:38.093 "seek_data": false, 00:16:38.093 "copy": true, 00:16:38.093 "nvme_iov_md": false 00:16:38.093 }, 00:16:38.093 "memory_domains": [ 00:16:38.093 { 00:16:38.093 "dma_device_id": "system", 00:16:38.093 "dma_device_type": 1 00:16:38.093 }, 00:16:38.093 { 00:16:38.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.093 "dma_device_type": 2 00:16:38.093 } 00:16:38.093 ], 00:16:38.093 "driver_specific": {} 00:16:38.093 } 00:16:38.093 ] 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.093 "name": "Existed_Raid", 00:16:38.093 "uuid": "e0a0d87b-83b6-4efb-a0c2-1b26ca6233b4", 00:16:38.093 "strip_size_kb": 0, 00:16:38.093 "state": "online", 00:16:38.093 "raid_level": "raid1", 00:16:38.093 "superblock": false, 00:16:38.093 "num_base_bdevs": 3, 00:16:38.093 "num_base_bdevs_discovered": 3, 00:16:38.093 "num_base_bdevs_operational": 3, 00:16:38.093 "base_bdevs_list": [ 00:16:38.093 { 00:16:38.093 "name": "NewBaseBdev", 00:16:38.093 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 00:16:38.093 "is_configured": true, 00:16:38.093 "data_offset": 0, 00:16:38.093 "data_size": 65536 
00:16:38.093 }, 00:16:38.093 { 00:16:38.093 "name": "BaseBdev2", 00:16:38.093 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:38.093 "is_configured": true, 00:16:38.093 "data_offset": 0, 00:16:38.093 "data_size": 65536 00:16:38.093 }, 00:16:38.093 { 00:16:38.093 "name": "BaseBdev3", 00:16:38.093 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:38.093 "is_configured": true, 00:16:38.093 "data_offset": 0, 00:16:38.093 "data_size": 65536 00:16:38.093 } 00:16:38.093 ] 00:16:38.093 }' 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.093 14:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.720 [2024-11-04 14:49:08.404500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.720 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.720 "name": "Existed_Raid", 00:16:38.720 "aliases": [ 00:16:38.720 "e0a0d87b-83b6-4efb-a0c2-1b26ca6233b4" 00:16:38.720 ], 00:16:38.720 "product_name": "Raid Volume", 00:16:38.720 "block_size": 512, 00:16:38.720 "num_blocks": 65536, 00:16:38.720 "uuid": "e0a0d87b-83b6-4efb-a0c2-1b26ca6233b4", 00:16:38.720 "assigned_rate_limits": { 00:16:38.720 "rw_ios_per_sec": 0, 00:16:38.720 "rw_mbytes_per_sec": 0, 00:16:38.720 "r_mbytes_per_sec": 0, 00:16:38.720 "w_mbytes_per_sec": 0 00:16:38.720 }, 00:16:38.720 "claimed": false, 00:16:38.720 "zoned": false, 00:16:38.720 "supported_io_types": { 00:16:38.720 "read": true, 00:16:38.720 "write": true, 00:16:38.720 "unmap": false, 00:16:38.720 "flush": false, 00:16:38.720 "reset": true, 00:16:38.720 "nvme_admin": false, 00:16:38.720 "nvme_io": false, 00:16:38.720 "nvme_io_md": false, 00:16:38.720 "write_zeroes": true, 00:16:38.720 "zcopy": false, 00:16:38.720 "get_zone_info": false, 00:16:38.720 "zone_management": false, 00:16:38.720 "zone_append": false, 00:16:38.720 "compare": false, 00:16:38.720 "compare_and_write": false, 00:16:38.720 "abort": false, 00:16:38.720 "seek_hole": false, 00:16:38.720 "seek_data": false, 00:16:38.720 "copy": false, 00:16:38.720 "nvme_iov_md": false 00:16:38.720 }, 00:16:38.720 "memory_domains": [ 00:16:38.720 { 00:16:38.720 "dma_device_id": "system", 00:16:38.720 "dma_device_type": 1 00:16:38.720 }, 00:16:38.720 { 00:16:38.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.720 "dma_device_type": 2 00:16:38.720 }, 00:16:38.720 { 00:16:38.720 "dma_device_id": "system", 00:16:38.720 "dma_device_type": 1 00:16:38.720 }, 00:16:38.720 { 00:16:38.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.720 "dma_device_type": 2 00:16:38.720 }, 00:16:38.720 { 00:16:38.720 "dma_device_id": "system", 00:16:38.720 "dma_device_type": 1 00:16:38.720 }, 
00:16:38.720 { 00:16:38.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.720 "dma_device_type": 2 00:16:38.720 } 00:16:38.720 ], 00:16:38.720 "driver_specific": { 00:16:38.720 "raid": { 00:16:38.720 "uuid": "e0a0d87b-83b6-4efb-a0c2-1b26ca6233b4", 00:16:38.720 "strip_size_kb": 0, 00:16:38.720 "state": "online", 00:16:38.720 "raid_level": "raid1", 00:16:38.720 "superblock": false, 00:16:38.720 "num_base_bdevs": 3, 00:16:38.720 "num_base_bdevs_discovered": 3, 00:16:38.720 "num_base_bdevs_operational": 3, 00:16:38.720 "base_bdevs_list": [ 00:16:38.720 { 00:16:38.720 "name": "NewBaseBdev", 00:16:38.720 "uuid": "0681abe1-5470-4edb-ac07-46c83f008199", 00:16:38.721 "is_configured": true, 00:16:38.721 "data_offset": 0, 00:16:38.721 "data_size": 65536 00:16:38.721 }, 00:16:38.721 { 00:16:38.721 "name": "BaseBdev2", 00:16:38.721 "uuid": "9ba7b1ec-2254-4ba3-a03d-53dcf84dc023", 00:16:38.721 "is_configured": true, 00:16:38.721 "data_offset": 0, 00:16:38.721 "data_size": 65536 00:16:38.721 }, 00:16:38.721 { 00:16:38.721 "name": "BaseBdev3", 00:16:38.721 "uuid": "75bcd87b-0c31-4445-a3e4-f09b0a3ad739", 00:16:38.721 "is_configured": true, 00:16:38.721 "data_offset": 0, 00:16:38.721 "data_size": 65536 00:16:38.721 } 00:16:38.721 ] 00:16:38.721 } 00:16:38.721 } 00:16:38.721 }' 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:38.721 BaseBdev2 00:16:38.721 BaseBdev3' 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.721 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.979 [2024-11-04 14:49:08.712124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.979 [2024-11-04 14:49:08.712170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.979 [2024-11-04 14:49:08.712306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.979 [2024-11-04 14:49:08.712744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.979 [2024-11-04 14:49:08.712902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67558 00:16:38.979 14:49:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67558 ']' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67558 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67558 00:16:38.979 killing process with pid 67558 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67558' 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67558 00:16:38.979 [2024-11-04 14:49:08.753320] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.979 14:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67558 00:16:39.240 [2024-11-04 14:49:09.040397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:40.612 00:16:40.612 real 0m12.123s 00:16:40.612 user 0m19.912s 00:16:40.612 sys 0m1.723s 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:40.612 ************************************ 00:16:40.612 END TEST raid_state_function_test 00:16:40.612 ************************************ 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.612 14:49:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:40.612 14:49:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:40.612 14:49:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:40.612 14:49:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.612 ************************************ 00:16:40.612 START TEST raid_state_function_test_sb 00:16:40.612 ************************************ 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.612 Process raid pid: 68196 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68196 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68196' 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68196 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:40.612 14:49:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68196 ']' 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.612 14:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.612 [2024-11-04 14:49:10.385053] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:16:40.612 [2024-11-04 14:49:10.385271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.870 [2024-11-04 14:49:10.576480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.870 [2024-11-04 14:49:10.729107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.128 [2024-11-04 14:49:10.964995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.128 [2024-11-04 14:49:10.965079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.693 [2024-11-04 14:49:11.428314] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.693 [2024-11-04 14:49:11.428390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.693 [2024-11-04 14:49:11.428408] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.693 [2024-11-04 14:49:11.428427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.693 [2024-11-04 14:49:11.428438] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:41.693 [2024-11-04 14:49:11.428453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.693 
14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.693 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.694 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.694 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.694 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.694 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.694 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.694 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.694 "name": "Existed_Raid", 00:16:41.694 "uuid": "98e5c3fb-9e1a-4a09-a5b2-97c963204ad5", 00:16:41.694 "strip_size_kb": 0, 00:16:41.694 "state": "configuring", 00:16:41.694 "raid_level": "raid1", 00:16:41.694 "superblock": true, 00:16:41.694 "num_base_bdevs": 3, 00:16:41.694 "num_base_bdevs_discovered": 0, 00:16:41.694 "num_base_bdevs_operational": 3, 00:16:41.694 "base_bdevs_list": [ 00:16:41.694 { 00:16:41.694 "name": "BaseBdev1", 00:16:41.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.694 "is_configured": false, 00:16:41.694 "data_offset": 0, 00:16:41.694 "data_size": 0 00:16:41.694 }, 00:16:41.694 { 00:16:41.694 "name": "BaseBdev2", 00:16:41.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.694 "is_configured": false, 00:16:41.694 "data_offset": 0, 00:16:41.694 "data_size": 0 00:16:41.694 }, 00:16:41.694 { 00:16:41.694 
"name": "BaseBdev3", 00:16:41.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.694 "is_configured": false, 00:16:41.694 "data_offset": 0, 00:16:41.694 "data_size": 0 00:16:41.694 } 00:16:41.694 ] 00:16:41.694 }' 00:16:41.694 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.694 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.260 [2024-11-04 14:49:11.944376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.260 [2024-11-04 14:49:11.944427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.260 [2024-11-04 14:49:11.952326] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.260 [2024-11-04 14:49:11.952530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.260 [2024-11-04 14:49:11.952656] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.260 [2024-11-04 
14:49:11.952815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.260 [2024-11-04 14:49:11.952938] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.260 [2024-11-04 14:49:11.953083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.260 14:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.260 [2024-11-04 14:49:12.002170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.260 BaseBdev1 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.260 [ 00:16:42.260 { 00:16:42.260 "name": "BaseBdev1", 00:16:42.260 "aliases": [ 00:16:42.260 "5160ce9b-3a72-4276-bdac-5d16c24c2148" 00:16:42.260 ], 00:16:42.260 "product_name": "Malloc disk", 00:16:42.260 "block_size": 512, 00:16:42.260 "num_blocks": 65536, 00:16:42.260 "uuid": "5160ce9b-3a72-4276-bdac-5d16c24c2148", 00:16:42.260 "assigned_rate_limits": { 00:16:42.260 "rw_ios_per_sec": 0, 00:16:42.260 "rw_mbytes_per_sec": 0, 00:16:42.260 "r_mbytes_per_sec": 0, 00:16:42.260 "w_mbytes_per_sec": 0 00:16:42.260 }, 00:16:42.260 "claimed": true, 00:16:42.260 "claim_type": "exclusive_write", 00:16:42.260 "zoned": false, 00:16:42.260 "supported_io_types": { 00:16:42.260 "read": true, 00:16:42.260 "write": true, 00:16:42.260 "unmap": true, 00:16:42.260 "flush": true, 00:16:42.260 "reset": true, 00:16:42.260 "nvme_admin": false, 00:16:42.260 "nvme_io": false, 00:16:42.260 "nvme_io_md": false, 00:16:42.260 "write_zeroes": true, 00:16:42.260 "zcopy": true, 00:16:42.260 "get_zone_info": false, 00:16:42.260 "zone_management": false, 00:16:42.260 "zone_append": false, 00:16:42.260 "compare": false, 00:16:42.260 "compare_and_write": false, 00:16:42.260 "abort": true, 00:16:42.260 "seek_hole": false, 00:16:42.260 "seek_data": false, 00:16:42.260 "copy": true, 00:16:42.260 "nvme_iov_md": false 00:16:42.260 }, 00:16:42.260 "memory_domains": [ 00:16:42.260 { 00:16:42.260 "dma_device_id": 
"system", 00:16:42.260 "dma_device_type": 1 00:16:42.260 }, 00:16:42.260 { 00:16:42.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.260 "dma_device_type": 2 00:16:42.260 } 00:16:42.260 ], 00:16:42.260 "driver_specific": {} 00:16:42.260 } 00:16:42.260 ] 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.260 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.260 "name": "Existed_Raid", 00:16:42.260 "uuid": "63788921-1cda-4077-8948-2ffbf5102c18", 00:16:42.260 "strip_size_kb": 0, 00:16:42.260 "state": "configuring", 00:16:42.260 "raid_level": "raid1", 00:16:42.260 "superblock": true, 00:16:42.260 "num_base_bdevs": 3, 00:16:42.260 "num_base_bdevs_discovered": 1, 00:16:42.260 "num_base_bdevs_operational": 3, 00:16:42.260 "base_bdevs_list": [ 00:16:42.260 { 00:16:42.260 "name": "BaseBdev1", 00:16:42.260 "uuid": "5160ce9b-3a72-4276-bdac-5d16c24c2148", 00:16:42.260 "is_configured": true, 00:16:42.260 "data_offset": 2048, 00:16:42.260 "data_size": 63488 00:16:42.260 }, 00:16:42.260 { 00:16:42.260 "name": "BaseBdev2", 00:16:42.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.260 "is_configured": false, 00:16:42.260 "data_offset": 0, 00:16:42.260 "data_size": 0 00:16:42.260 }, 00:16:42.260 { 00:16:42.260 "name": "BaseBdev3", 00:16:42.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.260 "is_configured": false, 00:16:42.260 "data_offset": 0, 00:16:42.260 "data_size": 0 00:16:42.260 } 00:16:42.260 ] 00:16:42.260 }' 00:16:42.261 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.261 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.827 [2024-11-04 14:49:12.542413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.827 [2024-11-04 14:49:12.542496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.827 [2024-11-04 14:49:12.550493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.827 [2024-11-04 14:49:12.553406] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.827 [2024-11-04 14:49:12.553479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.827 [2024-11-04 14:49:12.553499] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.827 [2024-11-04 14:49:12.553515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:42.827 14:49:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.827 "name": "Existed_Raid", 00:16:42.827 "uuid": "b552703f-f0a4-49fd-9a6f-c38d840dd480", 00:16:42.827 "strip_size_kb": 0, 00:16:42.827 "state": "configuring", 00:16:42.827 "raid_level": "raid1", 00:16:42.827 "superblock": true, 00:16:42.827 "num_base_bdevs": 3, 00:16:42.827 
"num_base_bdevs_discovered": 1, 00:16:42.827 "num_base_bdevs_operational": 3, 00:16:42.827 "base_bdevs_list": [ 00:16:42.827 { 00:16:42.827 "name": "BaseBdev1", 00:16:42.827 "uuid": "5160ce9b-3a72-4276-bdac-5d16c24c2148", 00:16:42.827 "is_configured": true, 00:16:42.827 "data_offset": 2048, 00:16:42.827 "data_size": 63488 00:16:42.827 }, 00:16:42.827 { 00:16:42.827 "name": "BaseBdev2", 00:16:42.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.827 "is_configured": false, 00:16:42.827 "data_offset": 0, 00:16:42.827 "data_size": 0 00:16:42.827 }, 00:16:42.827 { 00:16:42.827 "name": "BaseBdev3", 00:16:42.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.827 "is_configured": false, 00:16:42.827 "data_offset": 0, 00:16:42.827 "data_size": 0 00:16:42.827 } 00:16:42.827 ] 00:16:42.827 }' 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.827 14:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.393 [2024-11-04 14:49:13.113791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.393 BaseBdev2 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.393 [ 00:16:43.393 { 00:16:43.393 "name": "BaseBdev2", 00:16:43.393 "aliases": [ 00:16:43.393 "ee0b1897-c6d4-4e13-b412-6d88d8440b8f" 00:16:43.393 ], 00:16:43.393 "product_name": "Malloc disk", 00:16:43.393 "block_size": 512, 00:16:43.393 "num_blocks": 65536, 00:16:43.393 "uuid": "ee0b1897-c6d4-4e13-b412-6d88d8440b8f", 00:16:43.393 "assigned_rate_limits": { 00:16:43.393 "rw_ios_per_sec": 0, 00:16:43.393 "rw_mbytes_per_sec": 0, 00:16:43.393 "r_mbytes_per_sec": 0, 00:16:43.393 "w_mbytes_per_sec": 0 00:16:43.393 }, 00:16:43.393 "claimed": true, 00:16:43.393 "claim_type": "exclusive_write", 00:16:43.393 "zoned": false, 00:16:43.393 "supported_io_types": { 00:16:43.393 "read": true, 00:16:43.393 "write": true, 00:16:43.393 "unmap": true, 00:16:43.393 "flush": true, 00:16:43.393 "reset": true, 00:16:43.393 "nvme_admin": false, 
00:16:43.393 "nvme_io": false, 00:16:43.393 "nvme_io_md": false, 00:16:43.393 "write_zeroes": true, 00:16:43.393 "zcopy": true, 00:16:43.393 "get_zone_info": false, 00:16:43.393 "zone_management": false, 00:16:43.393 "zone_append": false, 00:16:43.393 "compare": false, 00:16:43.393 "compare_and_write": false, 00:16:43.393 "abort": true, 00:16:43.393 "seek_hole": false, 00:16:43.393 "seek_data": false, 00:16:43.393 "copy": true, 00:16:43.393 "nvme_iov_md": false 00:16:43.393 }, 00:16:43.393 "memory_domains": [ 00:16:43.393 { 00:16:43.393 "dma_device_id": "system", 00:16:43.393 "dma_device_type": 1 00:16:43.393 }, 00:16:43.393 { 00:16:43.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.393 "dma_device_type": 2 00:16:43.393 } 00:16:43.393 ], 00:16:43.393 "driver_specific": {} 00:16:43.393 } 00:16:43.393 ] 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.393 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.393 "name": "Existed_Raid", 00:16:43.393 "uuid": "b552703f-f0a4-49fd-9a6f-c38d840dd480", 00:16:43.393 "strip_size_kb": 0, 00:16:43.393 "state": "configuring", 00:16:43.393 "raid_level": "raid1", 00:16:43.393 "superblock": true, 00:16:43.393 "num_base_bdevs": 3, 00:16:43.393 "num_base_bdevs_discovered": 2, 00:16:43.393 "num_base_bdevs_operational": 3, 00:16:43.393 "base_bdevs_list": [ 00:16:43.393 { 00:16:43.393 "name": "BaseBdev1", 00:16:43.393 "uuid": "5160ce9b-3a72-4276-bdac-5d16c24c2148", 00:16:43.393 "is_configured": true, 00:16:43.394 "data_offset": 2048, 00:16:43.394 "data_size": 63488 00:16:43.394 }, 00:16:43.394 { 00:16:43.394 "name": "BaseBdev2", 00:16:43.394 "uuid": "ee0b1897-c6d4-4e13-b412-6d88d8440b8f", 00:16:43.394 "is_configured": true, 00:16:43.394 "data_offset": 2048, 00:16:43.394 "data_size": 63488 00:16:43.394 }, 
00:16:43.394 { 00:16:43.394 "name": "BaseBdev3", 00:16:43.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.394 "is_configured": false, 00:16:43.394 "data_offset": 0, 00:16:43.394 "data_size": 0 00:16:43.394 } 00:16:43.394 ] 00:16:43.394 }' 00:16:43.394 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.394 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.959 [2024-11-04 14:49:13.724322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.959 [2024-11-04 14:49:13.724703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:43.959 [2024-11-04 14:49:13.724737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:43.959 BaseBdev3 00:16:43.959 [2024-11-04 14:49:13.725099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:43.959 [2024-11-04 14:49:13.725332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:43.959 [2024-11-04 14:49:13.725350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:43.959 [2024-11-04 14:49:13.725561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:43.959 14:49:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.959 [ 00:16:43.959 { 00:16:43.959 "name": "BaseBdev3", 00:16:43.959 "aliases": [ 00:16:43.959 "a7063d23-cae2-4c65-a376-8da8e96ff7ae" 00:16:43.959 ], 00:16:43.959 "product_name": "Malloc disk", 00:16:43.959 "block_size": 512, 00:16:43.959 "num_blocks": 65536, 00:16:43.959 "uuid": "a7063d23-cae2-4c65-a376-8da8e96ff7ae", 00:16:43.959 "assigned_rate_limits": { 00:16:43.959 "rw_ios_per_sec": 0, 00:16:43.959 "rw_mbytes_per_sec": 0, 00:16:43.959 "r_mbytes_per_sec": 0, 00:16:43.959 "w_mbytes_per_sec": 0 00:16:43.959 }, 00:16:43.959 "claimed": true, 00:16:43.959 "claim_type": "exclusive_write", 00:16:43.959 "zoned": false, 
00:16:43.959 "supported_io_types": { 00:16:43.959 "read": true, 00:16:43.959 "write": true, 00:16:43.959 "unmap": true, 00:16:43.959 "flush": true, 00:16:43.959 "reset": true, 00:16:43.959 "nvme_admin": false, 00:16:43.959 "nvme_io": false, 00:16:43.959 "nvme_io_md": false, 00:16:43.959 "write_zeroes": true, 00:16:43.959 "zcopy": true, 00:16:43.959 "get_zone_info": false, 00:16:43.959 "zone_management": false, 00:16:43.959 "zone_append": false, 00:16:43.959 "compare": false, 00:16:43.959 "compare_and_write": false, 00:16:43.959 "abort": true, 00:16:43.959 "seek_hole": false, 00:16:43.959 "seek_data": false, 00:16:43.959 "copy": true, 00:16:43.959 "nvme_iov_md": false 00:16:43.959 }, 00:16:43.959 "memory_domains": [ 00:16:43.959 { 00:16:43.959 "dma_device_id": "system", 00:16:43.959 "dma_device_type": 1 00:16:43.959 }, 00:16:43.959 { 00:16:43.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.959 "dma_device_type": 2 00:16:43.959 } 00:16:43.959 ], 00:16:43.959 "driver_specific": {} 00:16:43.959 } 00:16:43.959 ] 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.959 14:49:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.959 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.959 "name": "Existed_Raid", 00:16:43.959 "uuid": "b552703f-f0a4-49fd-9a6f-c38d840dd480", 00:16:43.959 "strip_size_kb": 0, 00:16:43.960 "state": "online", 00:16:43.960 "raid_level": "raid1", 00:16:43.960 "superblock": true, 00:16:43.960 "num_base_bdevs": 3, 00:16:43.960 "num_base_bdevs_discovered": 3, 00:16:43.960 "num_base_bdevs_operational": 3, 00:16:43.960 "base_bdevs_list": [ 00:16:43.960 { 00:16:43.960 "name": "BaseBdev1", 00:16:43.960 "uuid": "5160ce9b-3a72-4276-bdac-5d16c24c2148", 00:16:43.960 "is_configured": true, 00:16:43.960 "data_offset": 2048, 00:16:43.960 "data_size": 63488 00:16:43.960 }, 00:16:43.960 { 00:16:43.960 
"name": "BaseBdev2", 00:16:43.960 "uuid": "ee0b1897-c6d4-4e13-b412-6d88d8440b8f", 00:16:43.960 "is_configured": true, 00:16:43.960 "data_offset": 2048, 00:16:43.960 "data_size": 63488 00:16:43.960 }, 00:16:43.960 { 00:16:43.960 "name": "BaseBdev3", 00:16:43.960 "uuid": "a7063d23-cae2-4c65-a376-8da8e96ff7ae", 00:16:43.960 "is_configured": true, 00:16:43.960 "data_offset": 2048, 00:16:43.960 "data_size": 63488 00:16:43.960 } 00:16:43.960 ] 00:16:43.960 }' 00:16:43.960 14:49:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.960 14:49:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.527 [2024-11-04 14:49:14.301015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.527 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.527 "name": "Existed_Raid", 00:16:44.527 "aliases": [ 00:16:44.527 "b552703f-f0a4-49fd-9a6f-c38d840dd480" 00:16:44.527 ], 00:16:44.527 "product_name": "Raid Volume", 00:16:44.527 "block_size": 512, 00:16:44.527 "num_blocks": 63488, 00:16:44.527 "uuid": "b552703f-f0a4-49fd-9a6f-c38d840dd480", 00:16:44.527 "assigned_rate_limits": { 00:16:44.527 "rw_ios_per_sec": 0, 00:16:44.527 "rw_mbytes_per_sec": 0, 00:16:44.527 "r_mbytes_per_sec": 0, 00:16:44.527 "w_mbytes_per_sec": 0 00:16:44.527 }, 00:16:44.527 "claimed": false, 00:16:44.527 "zoned": false, 00:16:44.527 "supported_io_types": { 00:16:44.527 "read": true, 00:16:44.527 "write": true, 00:16:44.527 "unmap": false, 00:16:44.527 "flush": false, 00:16:44.527 "reset": true, 00:16:44.527 "nvme_admin": false, 00:16:44.527 "nvme_io": false, 00:16:44.527 "nvme_io_md": false, 00:16:44.527 "write_zeroes": true, 00:16:44.527 "zcopy": false, 00:16:44.527 "get_zone_info": false, 00:16:44.527 "zone_management": false, 00:16:44.527 "zone_append": false, 00:16:44.527 "compare": false, 00:16:44.527 "compare_and_write": false, 00:16:44.527 "abort": false, 00:16:44.527 "seek_hole": false, 00:16:44.527 "seek_data": false, 00:16:44.527 "copy": false, 00:16:44.527 "nvme_iov_md": false 00:16:44.527 }, 00:16:44.528 "memory_domains": [ 00:16:44.528 { 00:16:44.528 "dma_device_id": "system", 00:16:44.528 "dma_device_type": 1 00:16:44.528 }, 00:16:44.528 { 00:16:44.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.528 "dma_device_type": 2 00:16:44.528 }, 00:16:44.528 { 00:16:44.528 "dma_device_id": "system", 00:16:44.528 "dma_device_type": 1 00:16:44.528 }, 00:16:44.528 { 00:16:44.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.528 "dma_device_type": 2 00:16:44.528 }, 00:16:44.528 { 00:16:44.528 "dma_device_id": "system", 00:16:44.528 "dma_device_type": 1 00:16:44.528 }, 
00:16:44.528 { 00:16:44.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.528 "dma_device_type": 2 00:16:44.528 } 00:16:44.528 ], 00:16:44.528 "driver_specific": { 00:16:44.528 "raid": { 00:16:44.528 "uuid": "b552703f-f0a4-49fd-9a6f-c38d840dd480", 00:16:44.528 "strip_size_kb": 0, 00:16:44.528 "state": "online", 00:16:44.528 "raid_level": "raid1", 00:16:44.528 "superblock": true, 00:16:44.528 "num_base_bdevs": 3, 00:16:44.528 "num_base_bdevs_discovered": 3, 00:16:44.528 "num_base_bdevs_operational": 3, 00:16:44.528 "base_bdevs_list": [ 00:16:44.528 { 00:16:44.528 "name": "BaseBdev1", 00:16:44.528 "uuid": "5160ce9b-3a72-4276-bdac-5d16c24c2148", 00:16:44.528 "is_configured": true, 00:16:44.528 "data_offset": 2048, 00:16:44.528 "data_size": 63488 00:16:44.528 }, 00:16:44.528 { 00:16:44.528 "name": "BaseBdev2", 00:16:44.528 "uuid": "ee0b1897-c6d4-4e13-b412-6d88d8440b8f", 00:16:44.528 "is_configured": true, 00:16:44.528 "data_offset": 2048, 00:16:44.528 "data_size": 63488 00:16:44.528 }, 00:16:44.528 { 00:16:44.528 "name": "BaseBdev3", 00:16:44.528 "uuid": "a7063d23-cae2-4c65-a376-8da8e96ff7ae", 00:16:44.528 "is_configured": true, 00:16:44.528 "data_offset": 2048, 00:16:44.528 "data_size": 63488 00:16:44.528 } 00:16:44.528 ] 00:16:44.528 } 00:16:44.528 } 00:16:44.528 }' 00:16:44.528 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.786 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:44.786 BaseBdev2 00:16:44.786 BaseBdev3' 00:16:44.786 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.786 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.786 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:16:44.786 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:44.786 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.787 14:49:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.787 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.787 [2024-11-04 14:49:14.636749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.045 "name": "Existed_Raid", 00:16:45.045 "uuid": "b552703f-f0a4-49fd-9a6f-c38d840dd480", 00:16:45.045 "strip_size_kb": 0, 00:16:45.045 "state": "online", 00:16:45.045 "raid_level": 
"raid1", 00:16:45.045 "superblock": true, 00:16:45.045 "num_base_bdevs": 3, 00:16:45.045 "num_base_bdevs_discovered": 2, 00:16:45.045 "num_base_bdevs_operational": 2, 00:16:45.045 "base_bdevs_list": [ 00:16:45.045 { 00:16:45.045 "name": null, 00:16:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.045 "is_configured": false, 00:16:45.045 "data_offset": 0, 00:16:45.045 "data_size": 63488 00:16:45.045 }, 00:16:45.045 { 00:16:45.045 "name": "BaseBdev2", 00:16:45.045 "uuid": "ee0b1897-c6d4-4e13-b412-6d88d8440b8f", 00:16:45.045 "is_configured": true, 00:16:45.045 "data_offset": 2048, 00:16:45.045 "data_size": 63488 00:16:45.045 }, 00:16:45.045 { 00:16:45.045 "name": "BaseBdev3", 00:16:45.045 "uuid": "a7063d23-cae2-4c65-a376-8da8e96ff7ae", 00:16:45.045 "is_configured": true, 00:16:45.045 "data_offset": 2048, 00:16:45.045 "data_size": 63488 00:16:45.045 } 00:16:45.045 ] 00:16:45.045 }' 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.045 14:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.611 [2024-11-04 14:49:15.309516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:45.611 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.611 14:49:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.611 [2024-11-04 14:49:15.461692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:45.611 [2024-11-04 14:49:15.461862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.870 [2024-11-04 14:49:15.559521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.870 [2024-11-04 14:49:15.559881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.870 [2024-11-04 14:49:15.560068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:45.870 14:49:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.870 BaseBdev2 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:45.870 14:49:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.870 [ 00:16:45.870 { 00:16:45.870 "name": "BaseBdev2", 00:16:45.870 "aliases": [ 00:16:45.870 "781591a0-a963-428c-8a43-f8448794d340" 00:16:45.870 ], 00:16:45.870 "product_name": "Malloc disk", 00:16:45.870 "block_size": 512, 00:16:45.870 "num_blocks": 65536, 00:16:45.870 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:45.870 "assigned_rate_limits": { 00:16:45.870 "rw_ios_per_sec": 0, 00:16:45.870 "rw_mbytes_per_sec": 0, 00:16:45.870 "r_mbytes_per_sec": 0, 00:16:45.870 "w_mbytes_per_sec": 0 00:16:45.870 }, 00:16:45.870 "claimed": false, 00:16:45.870 "zoned": false, 00:16:45.870 "supported_io_types": { 00:16:45.870 "read": true, 00:16:45.870 "write": true, 00:16:45.870 "unmap": true, 00:16:45.870 "flush": true, 00:16:45.870 "reset": true, 00:16:45.870 "nvme_admin": false, 00:16:45.870 "nvme_io": false, 00:16:45.870 "nvme_io_md": false, 00:16:45.870 "write_zeroes": true, 00:16:45.870 "zcopy": true, 00:16:45.870 "get_zone_info": false, 00:16:45.870 "zone_management": false, 00:16:45.870 "zone_append": false, 00:16:45.870 "compare": false, 00:16:45.870 "compare_and_write": false, 00:16:45.870 "abort": true, 00:16:45.870 "seek_hole": false, 00:16:45.870 "seek_data": false, 00:16:45.870 "copy": true, 00:16:45.870 "nvme_iov_md": false 00:16:45.870 }, 00:16:45.870 "memory_domains": [ 00:16:45.870 { 00:16:45.870 "dma_device_id": "system", 00:16:45.870 "dma_device_type": 1 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.870 "dma_device_type": 2 00:16:45.870 } 00:16:45.870 ], 00:16:45.870 "driver_specific": {} 00:16:45.870 } 00:16:45.870 ] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.870 BaseBdev3 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.870 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.129 [ 00:16:46.129 { 00:16:46.129 "name": "BaseBdev3", 00:16:46.129 "aliases": [ 00:16:46.129 "3902be67-4777-4df5-8a63-2a4a16ed3a16" 00:16:46.129 ], 00:16:46.129 "product_name": "Malloc disk", 00:16:46.129 "block_size": 512, 00:16:46.129 "num_blocks": 65536, 00:16:46.129 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:46.129 "assigned_rate_limits": { 00:16:46.129 "rw_ios_per_sec": 0, 00:16:46.129 "rw_mbytes_per_sec": 0, 00:16:46.129 "r_mbytes_per_sec": 0, 00:16:46.129 "w_mbytes_per_sec": 0 00:16:46.129 }, 00:16:46.129 "claimed": false, 00:16:46.129 "zoned": false, 00:16:46.129 "supported_io_types": { 00:16:46.129 "read": true, 00:16:46.129 "write": true, 00:16:46.129 "unmap": true, 00:16:46.129 "flush": true, 00:16:46.129 "reset": true, 00:16:46.129 "nvme_admin": false, 00:16:46.129 "nvme_io": false, 00:16:46.129 "nvme_io_md": false, 00:16:46.129 "write_zeroes": true, 00:16:46.129 "zcopy": true, 00:16:46.129 "get_zone_info": false, 00:16:46.129 "zone_management": false, 00:16:46.129 "zone_append": false, 00:16:46.129 "compare": false, 00:16:46.129 "compare_and_write": false, 00:16:46.129 "abort": true, 00:16:46.129 "seek_hole": false, 00:16:46.129 "seek_data": false, 00:16:46.129 "copy": true, 00:16:46.129 "nvme_iov_md": false 00:16:46.129 }, 00:16:46.129 "memory_domains": [ 00:16:46.129 { 00:16:46.129 "dma_device_id": "system", 00:16:46.129 "dma_device_type": 1 00:16:46.129 }, 00:16:46.129 { 00:16:46.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.129 "dma_device_type": 2 00:16:46.129 } 00:16:46.129 ], 00:16:46.129 "driver_specific": {} 00:16:46.129 } 00:16:46.129 ] 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.129 
14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.129 [2024-11-04 14:49:15.781939] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.129 [2024-11-04 14:49:15.782013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.129 [2024-11-04 14:49:15.782053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.129 [2024-11-04 14:49:15.784875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.129 14:49:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.129 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.129 "name": "Existed_Raid", 00:16:46.129 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:46.129 "strip_size_kb": 0, 00:16:46.129 "state": "configuring", 00:16:46.129 "raid_level": "raid1", 00:16:46.129 "superblock": true, 00:16:46.129 "num_base_bdevs": 3, 00:16:46.129 "num_base_bdevs_discovered": 2, 00:16:46.129 "num_base_bdevs_operational": 3, 00:16:46.129 "base_bdevs_list": [ 00:16:46.129 { 00:16:46.129 "name": "BaseBdev1", 00:16:46.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.129 "is_configured": false, 00:16:46.129 "data_offset": 0, 00:16:46.129 "data_size": 0 00:16:46.129 }, 00:16:46.129 { 00:16:46.129 "name": "BaseBdev2", 00:16:46.129 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:46.129 "is_configured": 
true, 00:16:46.129 "data_offset": 2048, 00:16:46.129 "data_size": 63488 00:16:46.129 }, 00:16:46.129 { 00:16:46.129 "name": "BaseBdev3", 00:16:46.129 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:46.129 "is_configured": true, 00:16:46.130 "data_offset": 2048, 00:16:46.130 "data_size": 63488 00:16:46.130 } 00:16:46.130 ] 00:16:46.130 }' 00:16:46.130 14:49:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.130 14:49:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.695 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:46.695 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.696 [2024-11-04 14:49:16.350063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.696 14:49:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.696 "name": "Existed_Raid", 00:16:46.696 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:46.696 "strip_size_kb": 0, 00:16:46.696 "state": "configuring", 00:16:46.696 "raid_level": "raid1", 00:16:46.696 "superblock": true, 00:16:46.696 "num_base_bdevs": 3, 00:16:46.696 "num_base_bdevs_discovered": 1, 00:16:46.696 "num_base_bdevs_operational": 3, 00:16:46.696 "base_bdevs_list": [ 00:16:46.696 { 00:16:46.696 "name": "BaseBdev1", 00:16:46.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.696 "is_configured": false, 00:16:46.696 "data_offset": 0, 00:16:46.696 "data_size": 0 00:16:46.696 }, 00:16:46.696 { 00:16:46.696 "name": null, 00:16:46.696 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:46.696 "is_configured": false, 00:16:46.696 "data_offset": 0, 00:16:46.696 "data_size": 63488 00:16:46.696 }, 00:16:46.696 { 00:16:46.696 "name": "BaseBdev3", 00:16:46.696 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:46.696 "is_configured": true, 
00:16:46.696 "data_offset": 2048, 00:16:46.696 "data_size": 63488 00:16:46.696 } 00:16:46.696 ] 00:16:46.696 }' 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.696 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 [2024-11-04 14:49:16.952307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.261 BaseBdev1 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:47.261 
14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 [ 00:16:47.261 { 00:16:47.261 "name": "BaseBdev1", 00:16:47.261 "aliases": [ 00:16:47.261 "15428f44-bf45-4602-8c7c-e9bab7b692fb" 00:16:47.261 ], 00:16:47.261 "product_name": "Malloc disk", 00:16:47.261 "block_size": 512, 00:16:47.261 "num_blocks": 65536, 00:16:47.261 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:47.261 "assigned_rate_limits": { 00:16:47.261 "rw_ios_per_sec": 0, 00:16:47.261 "rw_mbytes_per_sec": 0, 00:16:47.261 "r_mbytes_per_sec": 0, 00:16:47.261 "w_mbytes_per_sec": 0 00:16:47.261 }, 00:16:47.261 "claimed": true, 00:16:47.261 "claim_type": "exclusive_write", 00:16:47.261 "zoned": false, 00:16:47.261 "supported_io_types": { 00:16:47.261 "read": true, 00:16:47.261 "write": true, 00:16:47.261 "unmap": true, 00:16:47.261 "flush": true, 00:16:47.261 "reset": true, 00:16:47.261 "nvme_admin": false, 00:16:47.261 "nvme_io": 
false, 00:16:47.261 "nvme_io_md": false, 00:16:47.261 "write_zeroes": true, 00:16:47.261 "zcopy": true, 00:16:47.261 "get_zone_info": false, 00:16:47.261 "zone_management": false, 00:16:47.261 "zone_append": false, 00:16:47.261 "compare": false, 00:16:47.261 "compare_and_write": false, 00:16:47.261 "abort": true, 00:16:47.261 "seek_hole": false, 00:16:47.261 "seek_data": false, 00:16:47.261 "copy": true, 00:16:47.261 "nvme_iov_md": false 00:16:47.261 }, 00:16:47.261 "memory_domains": [ 00:16:47.261 { 00:16:47.261 "dma_device_id": "system", 00:16:47.261 "dma_device_type": 1 00:16:47.261 }, 00:16:47.261 { 00:16:47.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.261 "dma_device_type": 2 00:16:47.261 } 00:16:47.261 ], 00:16:47.261 "driver_specific": {} 00:16:47.261 } 00:16:47.261 ] 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.261 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.262 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.262 14:49:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.262 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.262 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.262 14:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.262 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.262 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.262 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.262 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.262 "name": "Existed_Raid", 00:16:47.262 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:47.262 "strip_size_kb": 0, 00:16:47.262 "state": "configuring", 00:16:47.262 "raid_level": "raid1", 00:16:47.262 "superblock": true, 00:16:47.262 "num_base_bdevs": 3, 00:16:47.262 "num_base_bdevs_discovered": 2, 00:16:47.262 "num_base_bdevs_operational": 3, 00:16:47.262 "base_bdevs_list": [ 00:16:47.262 { 00:16:47.262 "name": "BaseBdev1", 00:16:47.262 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:47.262 "is_configured": true, 00:16:47.262 "data_offset": 2048, 00:16:47.262 "data_size": 63488 00:16:47.262 }, 00:16:47.262 { 00:16:47.262 "name": null, 00:16:47.262 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:47.262 "is_configured": false, 00:16:47.262 "data_offset": 0, 00:16:47.262 "data_size": 63488 00:16:47.262 }, 00:16:47.262 { 00:16:47.262 "name": "BaseBdev3", 00:16:47.262 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:47.262 "is_configured": true, 00:16:47.262 "data_offset": 2048, 00:16:47.262 "data_size": 63488 00:16:47.262 } 00:16:47.262 ] 00:16:47.262 }' 
00:16:47.262 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.262 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.828 [2024-11-04 14:49:17.596549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.828 
14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.828 "name": "Existed_Raid", 00:16:47.828 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:47.828 "strip_size_kb": 0, 00:16:47.828 "state": "configuring", 00:16:47.828 "raid_level": "raid1", 00:16:47.828 "superblock": true, 00:16:47.828 "num_base_bdevs": 3, 00:16:47.828 "num_base_bdevs_discovered": 1, 00:16:47.828 "num_base_bdevs_operational": 3, 00:16:47.828 "base_bdevs_list": [ 00:16:47.828 { 00:16:47.828 "name": "BaseBdev1", 00:16:47.828 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:47.828 "is_configured": true, 00:16:47.828 "data_offset": 2048, 00:16:47.828 "data_size": 63488 00:16:47.828 }, 00:16:47.828 { 
00:16:47.828 "name": null, 00:16:47.828 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:47.828 "is_configured": false, 00:16:47.828 "data_offset": 0, 00:16:47.828 "data_size": 63488 00:16:47.828 }, 00:16:47.828 { 00:16:47.828 "name": null, 00:16:47.828 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:47.828 "is_configured": false, 00:16:47.828 "data_offset": 0, 00:16:47.828 "data_size": 63488 00:16:47.828 } 00:16:47.828 ] 00:16:47.828 }' 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.828 14:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.395 [2024-11-04 14:49:18.192792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.395 14:49:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.395 "name": "Existed_Raid", 00:16:48.395 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:48.395 "strip_size_kb": 0, 
00:16:48.395 "state": "configuring", 00:16:48.395 "raid_level": "raid1", 00:16:48.395 "superblock": true, 00:16:48.395 "num_base_bdevs": 3, 00:16:48.395 "num_base_bdevs_discovered": 2, 00:16:48.395 "num_base_bdevs_operational": 3, 00:16:48.395 "base_bdevs_list": [ 00:16:48.395 { 00:16:48.395 "name": "BaseBdev1", 00:16:48.395 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:48.395 "is_configured": true, 00:16:48.395 "data_offset": 2048, 00:16:48.395 "data_size": 63488 00:16:48.395 }, 00:16:48.395 { 00:16:48.395 "name": null, 00:16:48.395 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:48.395 "is_configured": false, 00:16:48.395 "data_offset": 0, 00:16:48.395 "data_size": 63488 00:16:48.395 }, 00:16:48.395 { 00:16:48.395 "name": "BaseBdev3", 00:16:48.395 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:48.395 "is_configured": true, 00:16:48.395 "data_offset": 2048, 00:16:48.395 "data_size": 63488 00:16:48.395 } 00:16:48.395 ] 00:16:48.395 }' 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.395 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.961 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.961 [2024-11-04 14:49:18.756911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.218 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.218 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:49.218 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.218 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.219 "name": "Existed_Raid", 00:16:49.219 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:49.219 "strip_size_kb": 0, 00:16:49.219 "state": "configuring", 00:16:49.219 "raid_level": "raid1", 00:16:49.219 "superblock": true, 00:16:49.219 "num_base_bdevs": 3, 00:16:49.219 "num_base_bdevs_discovered": 1, 00:16:49.219 "num_base_bdevs_operational": 3, 00:16:49.219 "base_bdevs_list": [ 00:16:49.219 { 00:16:49.219 "name": null, 00:16:49.219 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:49.219 "is_configured": false, 00:16:49.219 "data_offset": 0, 00:16:49.219 "data_size": 63488 00:16:49.219 }, 00:16:49.219 { 00:16:49.219 "name": null, 00:16:49.219 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:49.219 "is_configured": false, 00:16:49.219 "data_offset": 0, 00:16:49.219 "data_size": 63488 00:16:49.219 }, 00:16:49.219 { 00:16:49.219 "name": "BaseBdev3", 00:16:49.219 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:49.219 "is_configured": true, 00:16:49.219 "data_offset": 2048, 00:16:49.219 "data_size": 63488 00:16:49.219 } 00:16:49.219 ] 00:16:49.219 }' 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.219 14:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.476 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.476 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.476 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.476 14:49:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.743 [2024-11-04 14:49:19.419454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.743 "name": "Existed_Raid", 00:16:49.743 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:49.743 "strip_size_kb": 0, 00:16:49.743 "state": "configuring", 00:16:49.743 "raid_level": "raid1", 00:16:49.743 "superblock": true, 00:16:49.743 "num_base_bdevs": 3, 00:16:49.743 "num_base_bdevs_discovered": 2, 00:16:49.743 "num_base_bdevs_operational": 3, 00:16:49.743 "base_bdevs_list": [ 00:16:49.743 { 00:16:49.743 "name": null, 00:16:49.743 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:49.743 "is_configured": false, 00:16:49.743 "data_offset": 0, 00:16:49.743 "data_size": 63488 00:16:49.743 }, 00:16:49.743 { 00:16:49.743 "name": "BaseBdev2", 00:16:49.743 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:49.743 "is_configured": true, 00:16:49.743 "data_offset": 2048, 00:16:49.743 "data_size": 63488 00:16:49.743 }, 00:16:49.743 { 00:16:49.743 "name": "BaseBdev3", 00:16:49.743 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:49.743 "is_configured": true, 00:16:49.743 "data_offset": 2048, 00:16:49.743 "data_size": 63488 00:16:49.743 } 00:16:49.743 ] 00:16:49.743 }' 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.743 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.309 14:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:50.309 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.309 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15428f44-bf45-4602-8c7c-e9bab7b692fb 00:16:50.309 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.309 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.309 [2024-11-04 14:49:20.081986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:50.309 [2024-11-04 14:49:20.082375] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:50.309 [2024-11-04 14:49:20.082396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:50.309 NewBaseBdev 00:16:50.309 [2024-11-04 14:49:20.082726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:50.309 [2024-11-04 14:49:20.082951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:50.309 [2024-11-04 14:49:20.082975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:50.309 [2024-11-04 14:49:20.083148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.309 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.309 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:50.309 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.310 [ 00:16:50.310 { 00:16:50.310 "name": "NewBaseBdev", 00:16:50.310 "aliases": [ 00:16:50.310 "15428f44-bf45-4602-8c7c-e9bab7b692fb" 00:16:50.310 ], 00:16:50.310 "product_name": "Malloc disk", 00:16:50.310 "block_size": 512, 00:16:50.310 "num_blocks": 65536, 00:16:50.310 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:50.310 "assigned_rate_limits": { 00:16:50.310 "rw_ios_per_sec": 0, 00:16:50.310 "rw_mbytes_per_sec": 0, 00:16:50.310 "r_mbytes_per_sec": 0, 00:16:50.310 "w_mbytes_per_sec": 0 00:16:50.310 }, 00:16:50.310 "claimed": true, 00:16:50.310 "claim_type": "exclusive_write", 00:16:50.310 "zoned": false, 00:16:50.310 "supported_io_types": { 00:16:50.310 "read": true, 00:16:50.310 "write": true, 00:16:50.310 "unmap": true, 00:16:50.310 "flush": true, 00:16:50.310 "reset": true, 00:16:50.310 "nvme_admin": false, 00:16:50.310 "nvme_io": false, 00:16:50.310 "nvme_io_md": false, 00:16:50.310 "write_zeroes": true, 00:16:50.310 "zcopy": true, 00:16:50.310 "get_zone_info": false, 00:16:50.310 "zone_management": false, 00:16:50.310 "zone_append": false, 00:16:50.310 "compare": false, 00:16:50.310 "compare_and_write": false, 00:16:50.310 "abort": true, 00:16:50.310 "seek_hole": false, 00:16:50.310 "seek_data": false, 00:16:50.310 "copy": true, 00:16:50.310 "nvme_iov_md": false 00:16:50.310 }, 00:16:50.310 "memory_domains": [ 00:16:50.310 { 00:16:50.310 "dma_device_id": "system", 00:16:50.310 "dma_device_type": 1 00:16:50.310 }, 00:16:50.310 { 00:16:50.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.310 "dma_device_type": 2 00:16:50.310 } 00:16:50.310 ], 00:16:50.310 
"driver_specific": {} 00:16:50.310 } 00:16:50.310 ] 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.310 "name": "Existed_Raid", 00:16:50.310 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:50.310 "strip_size_kb": 0, 00:16:50.310 "state": "online", 00:16:50.310 "raid_level": "raid1", 00:16:50.310 "superblock": true, 00:16:50.310 "num_base_bdevs": 3, 00:16:50.310 "num_base_bdevs_discovered": 3, 00:16:50.310 "num_base_bdevs_operational": 3, 00:16:50.310 "base_bdevs_list": [ 00:16:50.310 { 00:16:50.310 "name": "NewBaseBdev", 00:16:50.310 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:50.310 "is_configured": true, 00:16:50.310 "data_offset": 2048, 00:16:50.310 "data_size": 63488 00:16:50.310 }, 00:16:50.310 { 00:16:50.310 "name": "BaseBdev2", 00:16:50.310 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:50.310 "is_configured": true, 00:16:50.310 "data_offset": 2048, 00:16:50.310 "data_size": 63488 00:16:50.310 }, 00:16:50.310 { 00:16:50.310 "name": "BaseBdev3", 00:16:50.310 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:50.310 "is_configured": true, 00:16:50.310 "data_offset": 2048, 00:16:50.310 "data_size": 63488 00:16:50.310 } 00:16:50.310 ] 00:16:50.310 }' 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.310 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.875 14:49:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.875 [2024-11-04 14:49:20.590642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.875 "name": "Existed_Raid", 00:16:50.875 "aliases": [ 00:16:50.875 "d6419481-c2de-4eb6-8a98-8679fe850761" 00:16:50.875 ], 00:16:50.875 "product_name": "Raid Volume", 00:16:50.875 "block_size": 512, 00:16:50.875 "num_blocks": 63488, 00:16:50.875 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:50.875 "assigned_rate_limits": { 00:16:50.875 "rw_ios_per_sec": 0, 00:16:50.875 "rw_mbytes_per_sec": 0, 00:16:50.875 "r_mbytes_per_sec": 0, 00:16:50.875 "w_mbytes_per_sec": 0 00:16:50.875 }, 00:16:50.875 "claimed": false, 00:16:50.875 "zoned": false, 00:16:50.875 "supported_io_types": { 00:16:50.875 "read": true, 00:16:50.875 "write": true, 00:16:50.875 "unmap": false, 00:16:50.875 "flush": false, 00:16:50.875 "reset": true, 00:16:50.875 "nvme_admin": false, 00:16:50.875 "nvme_io": false, 00:16:50.875 "nvme_io_md": false, 00:16:50.875 "write_zeroes": true, 00:16:50.875 "zcopy": false, 00:16:50.875 "get_zone_info": false, 00:16:50.875 "zone_management": false, 00:16:50.875 "zone_append": false, 
00:16:50.875 "compare": false, 00:16:50.875 "compare_and_write": false, 00:16:50.875 "abort": false, 00:16:50.875 "seek_hole": false, 00:16:50.875 "seek_data": false, 00:16:50.875 "copy": false, 00:16:50.875 "nvme_iov_md": false 00:16:50.875 }, 00:16:50.875 "memory_domains": [ 00:16:50.875 { 00:16:50.875 "dma_device_id": "system", 00:16:50.875 "dma_device_type": 1 00:16:50.875 }, 00:16:50.875 { 00:16:50.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.875 "dma_device_type": 2 00:16:50.875 }, 00:16:50.875 { 00:16:50.875 "dma_device_id": "system", 00:16:50.875 "dma_device_type": 1 00:16:50.875 }, 00:16:50.875 { 00:16:50.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.875 "dma_device_type": 2 00:16:50.875 }, 00:16:50.875 { 00:16:50.875 "dma_device_id": "system", 00:16:50.875 "dma_device_type": 1 00:16:50.875 }, 00:16:50.875 { 00:16:50.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.875 "dma_device_type": 2 00:16:50.875 } 00:16:50.875 ], 00:16:50.875 "driver_specific": { 00:16:50.875 "raid": { 00:16:50.875 "uuid": "d6419481-c2de-4eb6-8a98-8679fe850761", 00:16:50.875 "strip_size_kb": 0, 00:16:50.875 "state": "online", 00:16:50.875 "raid_level": "raid1", 00:16:50.875 "superblock": true, 00:16:50.875 "num_base_bdevs": 3, 00:16:50.875 "num_base_bdevs_discovered": 3, 00:16:50.875 "num_base_bdevs_operational": 3, 00:16:50.875 "base_bdevs_list": [ 00:16:50.875 { 00:16:50.875 "name": "NewBaseBdev", 00:16:50.875 "uuid": "15428f44-bf45-4602-8c7c-e9bab7b692fb", 00:16:50.875 "is_configured": true, 00:16:50.875 "data_offset": 2048, 00:16:50.875 "data_size": 63488 00:16:50.875 }, 00:16:50.875 { 00:16:50.875 "name": "BaseBdev2", 00:16:50.875 "uuid": "781591a0-a963-428c-8a43-f8448794d340", 00:16:50.875 "is_configured": true, 00:16:50.875 "data_offset": 2048, 00:16:50.875 "data_size": 63488 00:16:50.875 }, 00:16:50.875 { 00:16:50.875 "name": "BaseBdev3", 00:16:50.875 "uuid": "3902be67-4777-4df5-8a63-2a4a16ed3a16", 00:16:50.875 "is_configured": true, 00:16:50.875 
"data_offset": 2048, 00:16:50.875 "data_size": 63488 00:16:50.875 } 00:16:50.875 ] 00:16:50.875 } 00:16:50.875 } 00:16:50.875 }' 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:50.875 BaseBdev2 00:16:50.875 BaseBdev3' 00:16:50.875 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.876 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.876 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.876 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:50.876 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.876 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.876 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.876 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:51.133 [2024-11-04 14:49:20.890312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.133 [2024-11-04 14:49:20.890492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.133 [2024-11-04 14:49:20.890627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.133 [2024-11-04 14:49:20.891039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.133 [2024-11-04 14:49:20.891058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68196 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68196 ']' 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68196 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68196 00:16:51.133 killing process with pid 68196 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68196' 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # 
kill 68196 00:16:51.133 [2024-11-04 14:49:20.927684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.133 14:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68196 00:16:51.390 [2024-11-04 14:49:21.215270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.763 14:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:52.763 00:16:52.763 real 0m12.092s 00:16:52.763 user 0m19.777s 00:16:52.763 sys 0m1.823s 00:16:52.763 14:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:52.763 ************************************ 00:16:52.763 END TEST raid_state_function_test_sb 00:16:52.763 ************************************ 00:16:52.763 14:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.763 14:49:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:52.763 14:49:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:52.763 14:49:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:52.763 14:49:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.763 ************************************ 00:16:52.763 START TEST raid_superblock_test 00:16:52.763 ************************************ 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:52.763 14:49:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68833 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68833 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68833 ']' 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:52.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:52.763 14:49:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.763 [2024-11-04 14:49:22.536789] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:16:52.763 [2024-11-04 14:49:22.537266] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68833 ] 00:16:53.029 [2024-11-04 14:49:22.731791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.029 [2024-11-04 14:49:22.906486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.321 [2024-11-04 14:49:23.144895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.321 [2024-11-04 14:49:23.145220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:53.887 14:49:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.887 malloc1 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.887 [2024-11-04 14:49:23.577195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:53.887 [2024-11-04 14:49:23.577286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.887 [2024-11-04 14:49:23.577324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:53.887 [2024-11-04 14:49:23.577340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.887 [2024-11-04 14:49:23.580510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.887 [2024-11-04 14:49:23.580556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:53.887 pt1 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.887 malloc2 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.887 [2024-11-04 14:49:23.630861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:53.887 [2024-11-04 14:49:23.630930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.887 [2024-11-04 14:49:23.630965] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:53.887 [2024-11-04 14:49:23.630980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.887 [2024-11-04 14:49:23.634002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.887 [2024-11-04 14:49:23.634043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:53.887 pt2 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.887 malloc3 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.887 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.887 [2024-11-04 14:49:23.708135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:53.887 [2024-11-04 14:49:23.708221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.887 [2024-11-04 14:49:23.708274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:53.887 [2024-11-04 14:49:23.708293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.887 [2024-11-04 14:49:23.711321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.887 [2024-11-04 14:49:23.711365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:53.887 pt3 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.888 [2024-11-04 14:49:23.716285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:53.888 [2024-11-04 14:49:23.718893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:53.888 [2024-11-04 
14:49:23.719116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:53.888 [2024-11-04 14:49:23.719365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:53.888 [2024-11-04 14:49:23.719393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:53.888 [2024-11-04 14:49:23.719713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:53.888 [2024-11-04 14:49:23.719953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:53.888 [2024-11-04 14:49:23.719974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:53.888 [2024-11-04 14:49:23.720205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.888 
14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.888 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.146 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.146 "name": "raid_bdev1", 00:16:54.146 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:54.146 "strip_size_kb": 0, 00:16:54.146 "state": "online", 00:16:54.146 "raid_level": "raid1", 00:16:54.146 "superblock": true, 00:16:54.146 "num_base_bdevs": 3, 00:16:54.146 "num_base_bdevs_discovered": 3, 00:16:54.146 "num_base_bdevs_operational": 3, 00:16:54.146 "base_bdevs_list": [ 00:16:54.146 { 00:16:54.146 "name": "pt1", 00:16:54.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.146 "is_configured": true, 00:16:54.146 "data_offset": 2048, 00:16:54.146 "data_size": 63488 00:16:54.146 }, 00:16:54.146 { 00:16:54.146 "name": "pt2", 00:16:54.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.146 "is_configured": true, 00:16:54.146 "data_offset": 2048, 00:16:54.146 "data_size": 63488 00:16:54.146 }, 00:16:54.146 { 00:16:54.146 "name": "pt3", 00:16:54.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.146 "is_configured": true, 00:16:54.146 "data_offset": 2048, 00:16:54.146 "data_size": 63488 00:16:54.146 } 00:16:54.146 ] 00:16:54.146 }' 00:16:54.146 14:49:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.146 14:49:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.404 [2024-11-04 14:49:24.256936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.404 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.661 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.661 "name": "raid_bdev1", 00:16:54.661 "aliases": [ 00:16:54.661 "f05282a5-a284-4d21-8aec-edd189382f8a" 00:16:54.661 ], 00:16:54.661 "product_name": "Raid Volume", 00:16:54.661 "block_size": 512, 00:16:54.661 "num_blocks": 63488, 00:16:54.661 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:54.661 "assigned_rate_limits": { 00:16:54.661 "rw_ios_per_sec": 0, 00:16:54.661 "rw_mbytes_per_sec": 0, 00:16:54.661 "r_mbytes_per_sec": 0, 00:16:54.661 "w_mbytes_per_sec": 0 00:16:54.661 }, 00:16:54.661 "claimed": false, 00:16:54.661 "zoned": false, 00:16:54.661 
"supported_io_types": { 00:16:54.661 "read": true, 00:16:54.661 "write": true, 00:16:54.661 "unmap": false, 00:16:54.661 "flush": false, 00:16:54.661 "reset": true, 00:16:54.661 "nvme_admin": false, 00:16:54.661 "nvme_io": false, 00:16:54.661 "nvme_io_md": false, 00:16:54.661 "write_zeroes": true, 00:16:54.661 "zcopy": false, 00:16:54.661 "get_zone_info": false, 00:16:54.661 "zone_management": false, 00:16:54.661 "zone_append": false, 00:16:54.661 "compare": false, 00:16:54.661 "compare_and_write": false, 00:16:54.661 "abort": false, 00:16:54.661 "seek_hole": false, 00:16:54.661 "seek_data": false, 00:16:54.661 "copy": false, 00:16:54.661 "nvme_iov_md": false 00:16:54.661 }, 00:16:54.661 "memory_domains": [ 00:16:54.661 { 00:16:54.661 "dma_device_id": "system", 00:16:54.661 "dma_device_type": 1 00:16:54.661 }, 00:16:54.661 { 00:16:54.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.661 "dma_device_type": 2 00:16:54.661 }, 00:16:54.661 { 00:16:54.661 "dma_device_id": "system", 00:16:54.661 "dma_device_type": 1 00:16:54.661 }, 00:16:54.661 { 00:16:54.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.661 "dma_device_type": 2 00:16:54.661 }, 00:16:54.661 { 00:16:54.661 "dma_device_id": "system", 00:16:54.661 "dma_device_type": 1 00:16:54.661 }, 00:16:54.661 { 00:16:54.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.661 "dma_device_type": 2 00:16:54.661 } 00:16:54.661 ], 00:16:54.661 "driver_specific": { 00:16:54.661 "raid": { 00:16:54.661 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:54.661 "strip_size_kb": 0, 00:16:54.661 "state": "online", 00:16:54.661 "raid_level": "raid1", 00:16:54.661 "superblock": true, 00:16:54.661 "num_base_bdevs": 3, 00:16:54.661 "num_base_bdevs_discovered": 3, 00:16:54.661 "num_base_bdevs_operational": 3, 00:16:54.661 "base_bdevs_list": [ 00:16:54.661 { 00:16:54.661 "name": "pt1", 00:16:54.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.662 "is_configured": true, 00:16:54.662 "data_offset": 2048, 
00:16:54.662 "data_size": 63488 00:16:54.662 }, 00:16:54.662 { 00:16:54.662 "name": "pt2", 00:16:54.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.662 "is_configured": true, 00:16:54.662 "data_offset": 2048, 00:16:54.662 "data_size": 63488 00:16:54.662 }, 00:16:54.662 { 00:16:54.662 "name": "pt3", 00:16:54.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.662 "is_configured": true, 00:16:54.662 "data_offset": 2048, 00:16:54.662 "data_size": 63488 00:16:54.662 } 00:16:54.662 ] 00:16:54.662 } 00:16:54.662 } 00:16:54.662 }' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:54.662 pt2 00:16:54.662 pt3' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.662 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.920 14:49:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 [2024-11-04 14:49:24.588859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f05282a5-a284-4d21-8aec-edd189382f8a 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f05282a5-a284-4d21-8aec-edd189382f8a ']' 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 [2024-11-04 14:49:24.644538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.920 [2024-11-04 14:49:24.644570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.920 [2024-11-04 14:49:24.644692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.920 [2024-11-04 14:49:24.644793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.920 [2024-11-04 14:49:24.644808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:54.920 14:49:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 [2024-11-04 14:49:24.796684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:54.920 [2024-11-04 14:49:24.799460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:54.920 [2024-11-04 14:49:24.799536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:54.920 [2024-11-04 14:49:24.799627] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:54.920 [2024-11-04 14:49:24.799706] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:54.920 [2024-11-04 14:49:24.799739] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:54.920 [2024-11-04 14:49:24.799765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.920 [2024-11-04 14:49:24.799779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:54.920 request: 00:16:54.920 { 00:16:54.920 "name": "raid_bdev1", 00:16:54.920 "raid_level": "raid1", 00:16:54.920 "base_bdevs": [ 00:16:54.920 "malloc1", 00:16:54.920 "malloc2", 00:16:54.920 "malloc3" 00:16:54.920 ], 00:16:54.920 "superblock": false, 00:16:54.920 "method": "bdev_raid_create", 00:16:54.920 "req_id": 1 00:16:54.920 } 00:16:54.920 Got JSON-RPC error response 00:16:54.920 response: 00:16:54.920 { 00:16:54.920 "code": -17, 00:16:54.920 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:54.920 } 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:54.920 14:49:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.920 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.179 [2024-11-04 14:49:24.856624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:55.179 [2024-11-04 14:49:24.856847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.179 [2024-11-04 14:49:24.856932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:55.179 [2024-11-04 14:49:24.857134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.179 [2024-11-04 14:49:24.860313] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.179 [2024-11-04 14:49:24.860460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:55.179 [2024-11-04 14:49:24.860667] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:55.179 [2024-11-04 14:49:24.860838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:55.179 pt1 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.179 "name": "raid_bdev1", 00:16:55.179 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:55.179 "strip_size_kb": 0, 00:16:55.179 "state": "configuring", 00:16:55.179 "raid_level": "raid1", 00:16:55.179 "superblock": true, 00:16:55.179 "num_base_bdevs": 3, 00:16:55.179 "num_base_bdevs_discovered": 1, 00:16:55.179 "num_base_bdevs_operational": 3, 00:16:55.179 "base_bdevs_list": [ 00:16:55.179 { 00:16:55.179 "name": "pt1", 00:16:55.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.179 "is_configured": true, 00:16:55.179 "data_offset": 2048, 00:16:55.179 "data_size": 63488 00:16:55.179 }, 00:16:55.179 { 00:16:55.179 "name": null, 00:16:55.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.179 "is_configured": false, 00:16:55.179 "data_offset": 2048, 00:16:55.179 "data_size": 63488 00:16:55.179 }, 00:16:55.179 { 00:16:55.179 "name": null, 00:16:55.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.179 "is_configured": false, 00:16:55.179 "data_offset": 2048, 00:16:55.179 "data_size": 63488 00:16:55.179 } 00:16:55.179 ] 00:16:55.179 }' 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.179 14:49:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.744 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:55.744 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:55.744 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:55.744 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.744 [2024-11-04 14:49:25.384970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:55.744 [2024-11-04 14:49:25.385057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.744 [2024-11-04 14:49:25.385096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:55.744 [2024-11-04 14:49:25.385112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.745 [2024-11-04 14:49:25.385769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.745 [2024-11-04 14:49:25.385801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:55.745 [2024-11-04 14:49:25.385922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:55.745 [2024-11-04 14:49:25.385983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.745 pt2 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.745 [2024-11-04 14:49:25.392897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.745 14:49:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.745 "name": "raid_bdev1", 00:16:55.745 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:55.745 "strip_size_kb": 0, 00:16:55.745 "state": "configuring", 00:16:55.745 "raid_level": "raid1", 00:16:55.745 "superblock": true, 00:16:55.745 "num_base_bdevs": 3, 00:16:55.745 "num_base_bdevs_discovered": 1, 00:16:55.745 "num_base_bdevs_operational": 3, 00:16:55.745 "base_bdevs_list": [ 00:16:55.745 { 00:16:55.745 "name": "pt1", 00:16:55.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.745 
"is_configured": true, 00:16:55.745 "data_offset": 2048, 00:16:55.745 "data_size": 63488 00:16:55.745 }, 00:16:55.745 { 00:16:55.745 "name": null, 00:16:55.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.745 "is_configured": false, 00:16:55.745 "data_offset": 0, 00:16:55.745 "data_size": 63488 00:16:55.745 }, 00:16:55.745 { 00:16:55.745 "name": null, 00:16:55.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.745 "is_configured": false, 00:16:55.745 "data_offset": 2048, 00:16:55.745 "data_size": 63488 00:16:55.745 } 00:16:55.745 ] 00:16:55.745 }' 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.745 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.322 [2024-11-04 14:49:25.925106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.322 [2024-11-04 14:49:25.925205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.322 [2024-11-04 14:49:25.925251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:56.322 [2024-11-04 14:49:25.925273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.322 [2024-11-04 14:49:25.926368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.322 [2024-11-04 14:49:25.926412] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.322 [2024-11-04 14:49:25.926536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:56.322 [2024-11-04 14:49:25.926597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.322 pt2 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.322 [2024-11-04 14:49:25.933041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:56.322 [2024-11-04 14:49:25.933241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.322 [2024-11-04 14:49:25.933289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:56.322 [2024-11-04 14:49:25.933312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.322 [2024-11-04 14:49:25.933788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.322 [2024-11-04 14:49:25.933832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.322 [2024-11-04 14:49:25.933909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:56.322 [2024-11-04 14:49:25.933941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 
00:16:56.322 [2024-11-04 14:49:25.934103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:56.322 [2024-11-04 14:49:25.934127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:56.322 [2024-11-04 14:49:25.934460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:56.322 [2024-11-04 14:49:25.934667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:56.322 [2024-11-04 14:49:25.934684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:56.322 [2024-11-04 14:49:25.934860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.322 pt3 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.322 "name": "raid_bdev1", 00:16:56.322 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:56.322 "strip_size_kb": 0, 00:16:56.322 "state": "online", 00:16:56.322 "raid_level": "raid1", 00:16:56.322 "superblock": true, 00:16:56.322 "num_base_bdevs": 3, 00:16:56.322 "num_base_bdevs_discovered": 3, 00:16:56.322 "num_base_bdevs_operational": 3, 00:16:56.322 "base_bdevs_list": [ 00:16:56.322 { 00:16:56.322 "name": "pt1", 00:16:56.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:56.322 "is_configured": true, 00:16:56.322 "data_offset": 2048, 00:16:56.322 "data_size": 63488 00:16:56.322 }, 00:16:56.322 { 00:16:56.322 "name": "pt2", 00:16:56.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.322 "is_configured": true, 00:16:56.322 "data_offset": 2048, 00:16:56.322 "data_size": 63488 00:16:56.322 }, 00:16:56.322 { 00:16:56.322 "name": "pt3", 00:16:56.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:56.322 "is_configured": true, 00:16:56.322 "data_offset": 2048, 00:16:56.322 "data_size": 63488 00:16:56.322 } 00:16:56.322 ] 00:16:56.322 }' 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.322 14:49:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.592 [2024-11-04 14:49:26.461712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.592 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.849 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.849 "name": "raid_bdev1", 00:16:56.849 "aliases": [ 00:16:56.849 "f05282a5-a284-4d21-8aec-edd189382f8a" 00:16:56.849 ], 00:16:56.849 "product_name": "Raid Volume", 00:16:56.849 "block_size": 512, 00:16:56.849 "num_blocks": 63488, 00:16:56.849 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:56.849 "assigned_rate_limits": { 00:16:56.849 "rw_ios_per_sec": 0, 00:16:56.849 "rw_mbytes_per_sec": 0, 00:16:56.849 "r_mbytes_per_sec": 0, 00:16:56.849 
"w_mbytes_per_sec": 0 00:16:56.849 }, 00:16:56.849 "claimed": false, 00:16:56.849 "zoned": false, 00:16:56.849 "supported_io_types": { 00:16:56.849 "read": true, 00:16:56.849 "write": true, 00:16:56.850 "unmap": false, 00:16:56.850 "flush": false, 00:16:56.850 "reset": true, 00:16:56.850 "nvme_admin": false, 00:16:56.850 "nvme_io": false, 00:16:56.850 "nvme_io_md": false, 00:16:56.850 "write_zeroes": true, 00:16:56.850 "zcopy": false, 00:16:56.850 "get_zone_info": false, 00:16:56.850 "zone_management": false, 00:16:56.850 "zone_append": false, 00:16:56.850 "compare": false, 00:16:56.850 "compare_and_write": false, 00:16:56.850 "abort": false, 00:16:56.850 "seek_hole": false, 00:16:56.850 "seek_data": false, 00:16:56.850 "copy": false, 00:16:56.850 "nvme_iov_md": false 00:16:56.850 }, 00:16:56.850 "memory_domains": [ 00:16:56.850 { 00:16:56.850 "dma_device_id": "system", 00:16:56.850 "dma_device_type": 1 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.850 "dma_device_type": 2 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "dma_device_id": "system", 00:16:56.850 "dma_device_type": 1 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.850 "dma_device_type": 2 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "dma_device_id": "system", 00:16:56.850 "dma_device_type": 1 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.850 "dma_device_type": 2 00:16:56.850 } 00:16:56.850 ], 00:16:56.850 "driver_specific": { 00:16:56.850 "raid": { 00:16:56.850 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:56.850 "strip_size_kb": 0, 00:16:56.850 "state": "online", 00:16:56.850 "raid_level": "raid1", 00:16:56.850 "superblock": true, 00:16:56.850 "num_base_bdevs": 3, 00:16:56.850 "num_base_bdevs_discovered": 3, 00:16:56.850 "num_base_bdevs_operational": 3, 00:16:56.850 "base_bdevs_list": [ 00:16:56.850 { 00:16:56.850 "name": "pt1", 00:16:56.850 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:56.850 "is_configured": true, 00:16:56.850 "data_offset": 2048, 00:16:56.850 "data_size": 63488 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "name": "pt2", 00:16:56.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.850 "is_configured": true, 00:16:56.850 "data_offset": 2048, 00:16:56.850 "data_size": 63488 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "name": "pt3", 00:16:56.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:56.850 "is_configured": true, 00:16:56.850 "data_offset": 2048, 00:16:56.850 "data_size": 63488 00:16:56.850 } 00:16:56.850 ] 00:16:56.850 } 00:16:56.850 } 00:16:56.850 }' 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:56.850 pt2 00:16:56.850 pt3' 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.850 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.108 [2024-11-04 14:49:26.801704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f05282a5-a284-4d21-8aec-edd189382f8a '!=' f05282a5-a284-4d21-8aec-edd189382f8a ']' 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.108 [2024-11-04 14:49:26.853385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.108 "name": "raid_bdev1", 00:16:57.108 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:57.108 "strip_size_kb": 0, 00:16:57.108 "state": "online", 00:16:57.108 "raid_level": "raid1", 00:16:57.108 "superblock": true, 00:16:57.108 "num_base_bdevs": 3, 00:16:57.108 "num_base_bdevs_discovered": 2, 00:16:57.108 "num_base_bdevs_operational": 2, 00:16:57.108 "base_bdevs_list": [ 00:16:57.108 { 00:16:57.108 "name": null, 00:16:57.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.108 "is_configured": false, 00:16:57.108 "data_offset": 0, 00:16:57.108 "data_size": 63488 00:16:57.108 }, 00:16:57.108 { 00:16:57.108 "name": "pt2", 00:16:57.108 
"uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.108 "is_configured": true, 00:16:57.108 "data_offset": 2048, 00:16:57.108 "data_size": 63488 00:16:57.108 }, 00:16:57.108 { 00:16:57.108 "name": "pt3", 00:16:57.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.108 "is_configured": true, 00:16:57.108 "data_offset": 2048, 00:16:57.108 "data_size": 63488 00:16:57.108 } 00:16:57.108 ] 00:16:57.108 }' 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.108 14:49:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 [2024-11-04 14:49:27.421530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.674 [2024-11-04 14:49:27.421569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.674 [2024-11-04 14:49:27.421688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.674 [2024-11-04 14:49:27.421777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.674 [2024-11-04 14:49:27.421802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 
00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 [2024-11-04 14:49:27.501479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.674 [2024-11-04 14:49:27.501675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.674 [2024-11-04 14:49:27.501714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:57.674 [2024-11-04 14:49:27.501733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.674 [2024-11-04 14:49:27.504833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.674 [2024-11-04 14:49:27.505064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.674 [2024-11-04 14:49:27.505180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:57.674 [2024-11-04 14:49:27.505262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.674 pt2 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.674 "name": "raid_bdev1", 00:16:57.674 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:57.674 "strip_size_kb": 0, 00:16:57.674 "state": "configuring", 00:16:57.674 "raid_level": "raid1", 00:16:57.674 "superblock": true, 00:16:57.674 "num_base_bdevs": 3, 00:16:57.674 "num_base_bdevs_discovered": 1, 00:16:57.674 "num_base_bdevs_operational": 2, 00:16:57.674 "base_bdevs_list": [ 00:16:57.674 { 00:16:57.674 "name": null, 00:16:57.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.674 "is_configured": false, 00:16:57.674 "data_offset": 2048, 00:16:57.674 "data_size": 63488 00:16:57.674 }, 00:16:57.674 { 00:16:57.674 "name": "pt2", 
00:16:57.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.674 "is_configured": true, 00:16:57.674 "data_offset": 2048, 00:16:57.674 "data_size": 63488 00:16:57.674 }, 00:16:57.674 { 00:16:57.674 "name": null, 00:16:57.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.674 "is_configured": false, 00:16:57.674 "data_offset": 2048, 00:16:57.674 "data_size": 63488 00:16:57.674 } 00:16:57.674 ] 00:16:57.674 }' 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.674 14:49:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.238 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:58.238 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:58.238 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.239 [2024-11-04 14:49:28.041692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:58.239 [2024-11-04 14:49:28.041786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.239 [2024-11-04 14:49:28.041821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:58.239 [2024-11-04 14:49:28.041840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.239 [2024-11-04 14:49:28.042506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.239 [2024-11-04 14:49:28.042538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:16:58.239 [2024-11-04 14:49:28.042670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:58.239 [2024-11-04 14:49:28.042721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:58.239 [2024-11-04 14:49:28.042883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:58.239 [2024-11-04 14:49:28.042905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:58.239 [2024-11-04 14:49:28.043264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:58.239 [2024-11-04 14:49:28.043473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:58.239 [2024-11-04 14:49:28.043488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:58.239 [2024-11-04 14:49:28.043668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.239 pt3 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.239 14:49:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.239 "name": "raid_bdev1", 00:16:58.239 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:58.239 "strip_size_kb": 0, 00:16:58.239 "state": "online", 00:16:58.239 "raid_level": "raid1", 00:16:58.239 "superblock": true, 00:16:58.239 "num_base_bdevs": 3, 00:16:58.239 "num_base_bdevs_discovered": 2, 00:16:58.239 "num_base_bdevs_operational": 2, 00:16:58.239 "base_bdevs_list": [ 00:16:58.239 { 00:16:58.239 "name": null, 00:16:58.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.239 "is_configured": false, 00:16:58.239 "data_offset": 2048, 00:16:58.239 "data_size": 63488 00:16:58.239 }, 00:16:58.239 { 00:16:58.239 "name": "pt2", 00:16:58.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.239 "is_configured": true, 00:16:58.239 "data_offset": 2048, 00:16:58.239 "data_size": 63488 00:16:58.239 }, 00:16:58.239 { 00:16:58.239 "name": "pt3", 00:16:58.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.239 "is_configured": true, 00:16:58.239 "data_offset": 2048, 00:16:58.239 "data_size": 63488 00:16:58.239 } 
00:16:58.239 ] 00:16:58.239 }' 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.239 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.803 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.803 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.803 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.803 [2024-11-04 14:49:28.565801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.804 [2024-11-04 14:49:28.565850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.804 [2024-11-04 14:49:28.565981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.804 [2024-11-04 14:49:28.566069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.804 [2024-11-04 14:49:28.566084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.804 [2024-11-04 14:49:28.633778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.804 [2024-11-04 14:49:28.633873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.804 [2024-11-04 14:49:28.633907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:58.804 [2024-11-04 14:49:28.633937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.804 [2024-11-04 14:49:28.637211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.804 [2024-11-04 14:49:28.637284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.804 [2024-11-04 14:49:28.637399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:58.804 [2024-11-04 14:49:28.637471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:16:58.804 [2024-11-04 14:49:28.637641] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:58.804 [2024-11-04 14:49:28.637664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.804 [2024-11-04 14:49:28.637687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:58.804 [2024-11-04 14:49:28.637752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.804 pt1 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.804 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.061 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.061 "name": "raid_bdev1", 00:16:59.061 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:59.061 "strip_size_kb": 0, 00:16:59.061 "state": "configuring", 00:16:59.061 "raid_level": "raid1", 00:16:59.061 "superblock": true, 00:16:59.061 "num_base_bdevs": 3, 00:16:59.061 "num_base_bdevs_discovered": 1, 00:16:59.061 "num_base_bdevs_operational": 2, 00:16:59.061 "base_bdevs_list": [ 00:16:59.061 { 00:16:59.061 "name": null, 00:16:59.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.061 "is_configured": false, 00:16:59.061 "data_offset": 2048, 00:16:59.061 "data_size": 63488 00:16:59.061 }, 00:16:59.061 { 00:16:59.061 "name": "pt2", 00:16:59.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.061 "is_configured": true, 00:16:59.061 "data_offset": 2048, 00:16:59.061 "data_size": 63488 00:16:59.061 }, 00:16:59.061 { 00:16:59.061 "name": null, 00:16:59.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.061 "is_configured": false, 00:16:59.061 "data_offset": 2048, 00:16:59.061 "data_size": 63488 00:16:59.061 } 00:16:59.061 ] 00:16:59.061 }' 00:16:59.061 14:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.061 14:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.318 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:59.318 14:49:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.318 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.318 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:59.318 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.318 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:59.318 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:59.318 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.318 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.577 [2024-11-04 14:49:29.210170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:59.577 [2024-11-04 14:49:29.210267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.577 [2024-11-04 14:49:29.210306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:59.577 [2024-11-04 14:49:29.210322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.577 [2024-11-04 14:49:29.211006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.577 [2024-11-04 14:49:29.211038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:59.577 [2024-11-04 14:49:29.211154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:59.577 [2024-11-04 14:49:29.211217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:59.577 [2024-11-04 14:49:29.211408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 
00:16:59.577 [2024-11-04 14:49:29.211424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:59.577 [2024-11-04 14:49:29.211774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:59.577 [2024-11-04 14:49:29.212008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:59.577 [2024-11-04 14:49:29.212030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:59.577 [2024-11-04 14:49:29.212205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.577 pt3 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.577 
14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.577 "name": "raid_bdev1", 00:16:59.577 "uuid": "f05282a5-a284-4d21-8aec-edd189382f8a", 00:16:59.577 "strip_size_kb": 0, 00:16:59.577 "state": "online", 00:16:59.577 "raid_level": "raid1", 00:16:59.577 "superblock": true, 00:16:59.577 "num_base_bdevs": 3, 00:16:59.577 "num_base_bdevs_discovered": 2, 00:16:59.577 "num_base_bdevs_operational": 2, 00:16:59.577 "base_bdevs_list": [ 00:16:59.577 { 00:16:59.577 "name": null, 00:16:59.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.577 "is_configured": false, 00:16:59.577 "data_offset": 2048, 00:16:59.577 "data_size": 63488 00:16:59.577 }, 00:16:59.577 { 00:16:59.577 "name": "pt2", 00:16:59.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.577 "is_configured": true, 00:16:59.577 "data_offset": 2048, 00:16:59.577 "data_size": 63488 00:16:59.577 }, 00:16:59.577 { 00:16:59.577 "name": "pt3", 00:16:59.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.577 "is_configured": true, 00:16:59.577 "data_offset": 2048, 00:16:59.577 "data_size": 63488 00:16:59.577 } 00:16:59.577 ] 00:16:59.577 }' 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.577 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.840 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:59.840 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 
-- # jq -r '.[].base_bdevs_list[0].is_configured'
00:16:59.840 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.840 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:00.096 [2024-11-04 14:49:29.778688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f05282a5-a284-4d21-8aec-edd189382f8a '!=' f05282a5-a284-4d21-8aec-edd189382f8a ']'
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68833
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68833 ']'
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68833
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68833
00:17:00.096 killing process with pid 68833
14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68833'
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68833
[2024-11-04 14:49:29.860696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:00.096 14:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68833
00:17:00.096 [2024-11-04 14:49:29.860818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:00.096 [2024-11-04 14:49:29.860906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:00.096 [2024-11-04 14:49:29.860926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:17:00.353 [2024-11-04 14:49:30.148667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:01.721 ************************************
00:17:01.721 END TEST raid_superblock_test
************************************
00:17:01.721 14:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:17:01.721
00:17:01.721 real 0m8.872s
00:17:01.721 user 0m14.415s
00:17:01.721 sys 0m1.264s
00:17:01.721 14:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:17:01.721 14:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.721 14:49:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read
00:17:01.721 14:49:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:17:01.721 14:49:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:17:01.721 14:49:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:01.721 ************************************
00:17:01.721 START TEST raid_read_error_test
************************************
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:17:01.721 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.I2fDqcod4m
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69291
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69291
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69291 ']'
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:17:01.722 14:49:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:01.722 [2024-11-04 14:49:31.475699] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization...
[2024-11-04 14:49:31.476095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69291 ]
00:17:01.979 [2024-11-04 14:49:31.667744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:01.979 [2024-11-04 14:49:31.814494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:02.236 [2024-11-04 14:49:32.052398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:02.236 [2024-11-04 14:49:32.052742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 BaseBdev1_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 true
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 [2024-11-04 14:49:32.526280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
[2024-11-04 14:49:32.526352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-04 14:49:32.526383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
[2024-11-04 14:49:32.526401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-04 14:49:32.529438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-04 14:49:32.529489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
BaseBdev1
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 BaseBdev2_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 true
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 [2024-11-04 14:49:32.590915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
[2024-11-04 14:49:32.590988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-04 14:49:32.591014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
[2024-11-04 14:49:32.591031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-04 14:49:32.594114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-04 14:49:32.594178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
BaseBdev2
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 BaseBdev3_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 true
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 [2024-11-04 14:49:32.674409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
[2024-11-04 14:49:32.675953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-04 14:49:32.675993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
[2024-11-04 14:49:32.676014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-04 14:49:32.679047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-04 14:49:32.679096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
BaseBdev3
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.802 [2024-11-04 14:49:32.683432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-11-04 14:49:32.686307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-11-04 14:49:32.686414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-11-04 14:49:32.686715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
[2024-11-04 14:49:32.686735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
[2024-11-04 14:49:32.687046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
[2024-11-04 14:49:32.687447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
[2024-11-04 14:49:32.687510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
[2024-11-04 14:49:32.687797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:02.802 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:03.068 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:03.068 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:03.068 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:03.068 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:03.068 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:03.068 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:03.068 "name": "raid_bdev1",
00:17:03.068 "uuid": "d24f0188-f24e-46f8-a928-2d010bafa065",
00:17:03.068 "strip_size_kb": 0,
00:17:03.068 "state": "online",
00:17:03.068 "raid_level": "raid1",
00:17:03.068 "superblock": true,
00:17:03.068 "num_base_bdevs": 3,
00:17:03.068 "num_base_bdevs_discovered": 3,
00:17:03.068 "num_base_bdevs_operational": 3,
00:17:03.068 "base_bdevs_list": [
00:17:03.068 {
00:17:03.068 "name": "BaseBdev1",
00:17:03.068 "uuid": "1beab8e4-4004-55e3-8c2e-122a834ea7c2",
00:17:03.068 "is_configured": true,
00:17:03.068 "data_offset": 2048,
00:17:03.068 "data_size": 63488
00:17:03.068 },
00:17:03.068 {
00:17:03.068 "name": "BaseBdev2",
00:17:03.068 "uuid": "79d4ee68-19ae-550b-a318-1d602224b9a8",
00:17:03.068 "is_configured": true,
00:17:03.068 "data_offset": 2048,
00:17:03.068 "data_size": 63488
00:17:03.068 },
00:17:03.068 {
00:17:03.068 "name": "BaseBdev3",
00:17:03.068 "uuid": "bc6484d2-8164-51a9-9980-63f52f902b04",
00:17:03.068 "is_configured": true,
00:17:03.068 "data_offset": 2048,
00:17:03.068 "data_size": 63488
00:17:03.068 }
00:17:03.068 ]
00:17:03.068 }'
00:17:03.068 14:49:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:03.068 14:49:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:03.342 14:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:17:03.342 14:49:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:17:03.600 [2024-11-04 14:49:33.321555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:04.530 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:04.531 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.531 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:04.531 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.531 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:04.531 "name": "raid_bdev1",
00:17:04.531 "uuid": "d24f0188-f24e-46f8-a928-2d010bafa065",
00:17:04.531 "strip_size_kb": 0,
00:17:04.531 "state": "online",
00:17:04.531 "raid_level": "raid1",
00:17:04.531 "superblock": true,
00:17:04.531 "num_base_bdevs": 3,
00:17:04.531 "num_base_bdevs_discovered": 3,
00:17:04.531 "num_base_bdevs_operational": 3,
00:17:04.531 "base_bdevs_list": [
00:17:04.531 {
00:17:04.531 "name": "BaseBdev1",
00:17:04.531 "uuid": "1beab8e4-4004-55e3-8c2e-122a834ea7c2",
00:17:04.531 "is_configured": true,
00:17:04.531 "data_offset": 2048,
00:17:04.531 "data_size": 63488
00:17:04.531 },
00:17:04.531 {
00:17:04.531 "name": "BaseBdev2",
00:17:04.531 "uuid": "79d4ee68-19ae-550b-a318-1d602224b9a8",
00:17:04.531 "is_configured": true,
00:17:04.531 "data_offset": 2048,
00:17:04.531 "data_size": 63488
00:17:04.531 },
00:17:04.531 {
00:17:04.531 "name": "BaseBdev3",
00:17:04.531 "uuid": "bc6484d2-8164-51a9-9980-63f52f902b04",
00:17:04.531 "is_configured": true,
00:17:04.531 "data_offset": 2048,
00:17:04.531 "data_size": 63488
00:17:04.531 }
00:17:04.531 ]
00:17:04.531 }'
00:17:04.531 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:04.531 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:05.096 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:05.096 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.096 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:05.096 [2024-11-04 14:49:34.726287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-11-04 14:49:34.726342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-04 14:49:34.730028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-04 14:49:34.730219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-11-04 14:49:34.730502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-04 14:49:34.730695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
{
00:17:05.097 "results": [
00:17:05.097 {
00:17:05.097 "job": "raid_bdev1",
00:17:05.097 "core_mask": "0x1",
00:17:05.097 "workload": "randrw",
00:17:05.097 "percentage": 50,
00:17:05.097 "status": "finished",
00:17:05.097 "queue_depth": 1,
00:17:05.097 "io_size": 131072,
00:17:05.097 "runtime": 1.402018,
00:17:05.097 "iops": 8171.792373564391,
00:17:05.097 "mibps": 1021.4740466955489,
00:17:05.097 "io_failed": 0,
00:17:05.097 "io_timeout": 0,
00:17:05.097 "avg_latency_us": 118.11362073206534,
00:17:05.097 "min_latency_us": 42.123636363636365,
00:17:05.097 "max_latency_us": 2010.7636363636364
00:17:05.097 }
00:17:05.097 ],
00:17:05.097 "core_count": 1
00:17:05.097 }
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69291
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69291 ']'
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69291
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69291
killing process with pid 69291
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69291'
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69291
[2024-11-04 14:49:34.768137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:05.097 14:49:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69291
[2024-11-04 14:49:34.989662] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.I2fDqcod4m
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
************************************
00:17:06.728 END TEST raid_read_error_test
************************************
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:17:06.728
00:17:06.728 real 0m4.879s
00:17:06.728 user 0m5.948s
00:17:06.728 sys 0m0.645s
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:17:06.728 14:49:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:06.728 14:49:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write
00:17:06.728 14:49:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:17:06.728 14:49:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:17:06.728 14:49:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:06.728 ************************************
00:17:06.728 START TEST raid_write_error_test
************************************
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3JHHZzAw8P
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69435
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69435
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69435 ']'
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:17:06.728 14:49:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:06.728 [2024-11-04 14:49:36.411516] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization...
[2024-11-04 14:49:36.411978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69435 ]
00:17:06.987 [2024-11-04 14:49:36.605820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:06.987 [2024-11-04 14:49:36.759996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:07.244 [2024-11-04 14:49:37.002187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:07.244 [2024-11-04 14:49:37.002499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:07.502 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:17:07.502 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0
00:17:07.502 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:17:07.502 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.760 BaseBdev1_malloc
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.760 true
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.760 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.760 [2024-11-04 14:49:37.408266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
[2024-11-04 14:49:37.408357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-04 14:49:37.408386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
[2024-11-04 14:49:37.408403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-04 14:49:37.411795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-04 14:49:37.411844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
BaseBdev1
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.761 BaseBdev2_malloc
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.761 true
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.761 [2024-11-04 14:49:37.473336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
[2024-11-04 14:49:37.473452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-04 14:49:37.473488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
[2024-11-04 14:49:37.473507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:07.761 [2024-11-04 14:49:37.476850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-04 14:49:37.476898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
BaseBdev2
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.761 BaseBdev3_malloc
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.761 true
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.761 [2024-11-04 14:49:37.556023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
[2024-11-04 14:49:37.556098] vbdev_passthru.c:
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.761 [2024-11-04 14:49:37.556134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:07.761 [2024-11-04 14:49:37.556156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.761 [2024-11-04 14:49:37.559351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.761 [2024-11-04 14:49:37.559397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:07.761 BaseBdev3 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.761 [2024-11-04 14:49:37.564287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.761 [2024-11-04 14:49:37.567184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.761 [2024-11-04 14:49:37.567454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.761 [2024-11-04 14:49:37.567912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:07.761 [2024-11-04 14:49:37.568053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:07.761 [2024-11-04 14:49:37.568475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:07.761 [2024-11-04 14:49:37.568893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:07.761 [2024-11-04 14:49:37.569053] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:07.761 [2024-11-04 14:49:37.569472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.761 "name": "raid_bdev1", 00:17:07.761 "uuid": "d10b846c-3811-4147-8288-67aa39b1a6b6", 00:17:07.761 "strip_size_kb": 0, 00:17:07.761 "state": "online", 00:17:07.761 "raid_level": "raid1", 00:17:07.761 "superblock": true, 00:17:07.761 "num_base_bdevs": 3, 00:17:07.761 "num_base_bdevs_discovered": 3, 00:17:07.761 "num_base_bdevs_operational": 3, 00:17:07.761 "base_bdevs_list": [ 00:17:07.761 { 00:17:07.761 "name": "BaseBdev1", 00:17:07.761 "uuid": "aa9b2d25-97e7-5717-8c2d-dbf19221e314", 00:17:07.761 "is_configured": true, 00:17:07.761 "data_offset": 2048, 00:17:07.761 "data_size": 63488 00:17:07.761 }, 00:17:07.761 { 00:17:07.761 "name": "BaseBdev2", 00:17:07.761 "uuid": "01a63e8e-cb82-5fd4-8a7d-110b54076129", 00:17:07.761 "is_configured": true, 00:17:07.761 "data_offset": 2048, 00:17:07.761 "data_size": 63488 00:17:07.761 }, 00:17:07.761 { 00:17:07.761 "name": "BaseBdev3", 00:17:07.761 "uuid": "6cd06a05-cd9b-53ea-9036-a831c42491ba", 00:17:07.761 "is_configured": true, 00:17:07.761 "data_offset": 2048, 00:17:07.761 "data_size": 63488 00:17:07.761 } 00:17:07.761 ] 00:17:07.761 }' 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.761 14:49:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.327 14:49:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:08.327 14:49:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:08.585 [2024-11-04 14:49:38.259339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.518 [2024-11-04 14:49:39.132318] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:09.518 [2024-11-04 14:49:39.132387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.518 [2024-11-04 14:49:39.132680] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.518 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.519 "name": "raid_bdev1", 00:17:09.519 "uuid": "d10b846c-3811-4147-8288-67aa39b1a6b6", 00:17:09.519 "strip_size_kb": 0, 00:17:09.519 "state": "online", 00:17:09.519 "raid_level": "raid1", 00:17:09.519 "superblock": true, 00:17:09.519 "num_base_bdevs": 3, 00:17:09.519 "num_base_bdevs_discovered": 2, 00:17:09.519 "num_base_bdevs_operational": 2, 00:17:09.519 "base_bdevs_list": [ 00:17:09.519 { 00:17:09.519 "name": null, 00:17:09.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.519 "is_configured": false, 00:17:09.519 "data_offset": 0, 00:17:09.519 "data_size": 63488 00:17:09.519 }, 00:17:09.519 { 00:17:09.519 "name": "BaseBdev2", 00:17:09.519 "uuid": "01a63e8e-cb82-5fd4-8a7d-110b54076129", 00:17:09.519 "is_configured": true, 00:17:09.519 "data_offset": 2048, 00:17:09.519 "data_size": 63488 00:17:09.519 }, 00:17:09.519 { 00:17:09.519 "name": "BaseBdev3", 00:17:09.519 "uuid": "6cd06a05-cd9b-53ea-9036-a831c42491ba", 00:17:09.519 "is_configured": true, 00:17:09.519 "data_offset": 2048, 00:17:09.519 "data_size": 63488 00:17:09.519 } 00:17:09.519 ] 00:17:09.519 }' 00:17:09.519 14:49:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.519 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.829 [2024-11-04 14:49:39.680022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.829 [2024-11-04 14:49:39.680256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.829 [2024-11-04 14:49:39.683833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.829 { 00:17:09.829 "results": [ 00:17:09.829 { 00:17:09.829 "job": "raid_bdev1", 00:17:09.829 "core_mask": "0x1", 00:17:09.829 "workload": "randrw", 00:17:09.829 "percentage": 50, 00:17:09.829 "status": "finished", 00:17:09.829 "queue_depth": 1, 00:17:09.829 "io_size": 131072, 00:17:09.829 "runtime": 1.417983, 00:17:09.829 "iops": 8268.082198446667, 00:17:09.829 "mibps": 1033.5102748058334, 00:17:09.829 "io_failed": 0, 00:17:09.829 "io_timeout": 0, 00:17:09.829 "avg_latency_us": 116.69743246177227, 00:17:09.829 "min_latency_us": 41.192727272727275, 00:17:09.829 "max_latency_us": 1861.8181818181818 00:17:09.829 } 00:17:09.829 ], 00:17:09.829 "core_count": 1 00:17:09.829 } 00:17:09.829 [2024-11-04 14:49:39.684093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.829 [2024-11-04 14:49:39.684254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.829 [2024-11-04 14:49:39.684280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:09.829 14:49:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69435 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69435 ']' 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69435 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:09.829 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69435 00:17:10.086 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:10.086 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:10.086 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69435' 00:17:10.086 killing process with pid 69435 00:17:10.086 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69435 00:17:10.086 [2024-11-04 14:49:39.723514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.086 14:49:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69435 00:17:10.086 [2024-11-04 14:49:39.962234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3JHHZzAw8P 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:11.457 
14:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:11.457 00:17:11.457 real 0m4.918s 00:17:11.457 user 0m5.957s 00:17:11.457 sys 0m0.699s 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:11.457 14:49:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.457 ************************************ 00:17:11.457 END TEST raid_write_error_test 00:17:11.457 ************************************ 00:17:11.457 14:49:41 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:17:11.457 14:49:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:11.457 14:49:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:11.457 14:49:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:11.457 14:49:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:11.457 14:49:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.457 ************************************ 00:17:11.457 START TEST raid_state_function_test 00:17:11.457 ************************************ 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 
00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69579 00:17:11.457 Process raid pid: 69579 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69579' 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69579 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69579 ']' 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.457 14:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.715 [2024-11-04 14:49:41.369502] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:17:11.715 [2024-11-04 14:49:41.369667] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.715 [2024-11-04 14:49:41.549390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.972 [2024-11-04 14:49:41.696849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.229 [2024-11-04 14:49:41.928611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.229 [2024-11-04 14:49:41.928684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.794 [2024-11-04 14:49:42.427700] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.794 [2024-11-04 14:49:42.427773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.794 [2024-11-04 14:49:42.427790] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.794 [2024-11-04 14:49:42.427808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.794 [2024-11-04 14:49:42.427818] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:12.794 [2024-11-04 14:49:42.427833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:12.794 [2024-11-04 14:49:42.427843] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:12.794 [2024-11-04 14:49:42.427858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.794 "name": "Existed_Raid", 00:17:12.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.794 "strip_size_kb": 64, 00:17:12.794 "state": "configuring", 00:17:12.794 "raid_level": "raid0", 00:17:12.794 "superblock": false, 00:17:12.794 "num_base_bdevs": 4, 00:17:12.794 "num_base_bdevs_discovered": 0, 00:17:12.794 "num_base_bdevs_operational": 4, 00:17:12.794 "base_bdevs_list": [ 00:17:12.794 { 00:17:12.794 "name": "BaseBdev1", 00:17:12.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.794 "is_configured": false, 00:17:12.794 "data_offset": 0, 00:17:12.794 "data_size": 0 00:17:12.794 }, 00:17:12.794 { 00:17:12.794 "name": "BaseBdev2", 00:17:12.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.794 "is_configured": false, 00:17:12.794 "data_offset": 0, 00:17:12.794 "data_size": 0 00:17:12.794 }, 00:17:12.794 { 00:17:12.794 "name": "BaseBdev3", 00:17:12.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.794 "is_configured": false, 00:17:12.794 "data_offset": 0, 00:17:12.794 "data_size": 0 00:17:12.794 }, 00:17:12.794 { 00:17:12.794 "name": "BaseBdev4", 00:17:12.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.794 "is_configured": false, 00:17:12.794 "data_offset": 0, 00:17:12.794 "data_size": 0 00:17:12.794 } 00:17:12.794 ] 00:17:12.794 
}' 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.794 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.358 [2024-11-04 14:49:42.955808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.358 [2024-11-04 14:49:42.955864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.358 [2024-11-04 14:49:42.963760] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.358 [2024-11-04 14:49:42.963818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.358 [2024-11-04 14:49:42.963835] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.358 [2024-11-04 14:49:42.963852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.358 [2024-11-04 14:49:42.963862] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.358 
[2024-11-04 14:49:42.963878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.358 [2024-11-04 14:49:42.963887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:13.358 [2024-11-04 14:49:42.963902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.358 14:49:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.358 [2024-11-04 14:49:43.012242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.358 BaseBdev1 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.358 [ 00:17:13.358 { 00:17:13.358 "name": "BaseBdev1", 00:17:13.358 "aliases": [ 00:17:13.358 "9767cf84-bd9a-4645-8b1a-0d37c1f29da7" 00:17:13.358 ], 00:17:13.358 "product_name": "Malloc disk", 00:17:13.358 "block_size": 512, 00:17:13.358 "num_blocks": 65536, 00:17:13.358 "uuid": "9767cf84-bd9a-4645-8b1a-0d37c1f29da7", 00:17:13.358 "assigned_rate_limits": { 00:17:13.358 "rw_ios_per_sec": 0, 00:17:13.358 "rw_mbytes_per_sec": 0, 00:17:13.358 "r_mbytes_per_sec": 0, 00:17:13.358 "w_mbytes_per_sec": 0 00:17:13.358 }, 00:17:13.358 "claimed": true, 00:17:13.358 "claim_type": "exclusive_write", 00:17:13.358 "zoned": false, 00:17:13.358 "supported_io_types": { 00:17:13.358 "read": true, 00:17:13.358 "write": true, 00:17:13.358 "unmap": true, 00:17:13.358 "flush": true, 00:17:13.358 "reset": true, 00:17:13.358 "nvme_admin": false, 00:17:13.358 "nvme_io": false, 00:17:13.358 "nvme_io_md": false, 00:17:13.358 "write_zeroes": true, 00:17:13.358 "zcopy": true, 00:17:13.358 "get_zone_info": false, 00:17:13.358 "zone_management": false, 00:17:13.358 "zone_append": false, 00:17:13.358 "compare": false, 00:17:13.358 "compare_and_write": false, 00:17:13.358 "abort": true, 00:17:13.358 "seek_hole": false, 00:17:13.358 "seek_data": false, 00:17:13.358 "copy": true, 00:17:13.358 "nvme_iov_md": false 00:17:13.358 }, 00:17:13.358 "memory_domains": [ 00:17:13.358 { 00:17:13.358 "dma_device_id": "system", 00:17:13.358 
"dma_device_type": 1 00:17:13.358 }, 00:17:13.358 { 00:17:13.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.358 "dma_device_type": 2 00:17:13.358 } 00:17:13.358 ], 00:17:13.358 "driver_specific": {} 00:17:13.358 } 00:17:13.358 ] 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.358 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.359 "name": "Existed_Raid", 00:17:13.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.359 "strip_size_kb": 64, 00:17:13.359 "state": "configuring", 00:17:13.359 "raid_level": "raid0", 00:17:13.359 "superblock": false, 00:17:13.359 "num_base_bdevs": 4, 00:17:13.359 "num_base_bdevs_discovered": 1, 00:17:13.359 "num_base_bdevs_operational": 4, 00:17:13.359 "base_bdevs_list": [ 00:17:13.359 { 00:17:13.359 "name": "BaseBdev1", 00:17:13.359 "uuid": "9767cf84-bd9a-4645-8b1a-0d37c1f29da7", 00:17:13.359 "is_configured": true, 00:17:13.359 "data_offset": 0, 00:17:13.359 "data_size": 65536 00:17:13.359 }, 00:17:13.359 { 00:17:13.359 "name": "BaseBdev2", 00:17:13.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.359 "is_configured": false, 00:17:13.359 "data_offset": 0, 00:17:13.359 "data_size": 0 00:17:13.359 }, 00:17:13.359 { 00:17:13.359 "name": "BaseBdev3", 00:17:13.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.359 "is_configured": false, 00:17:13.359 "data_offset": 0, 00:17:13.359 "data_size": 0 00:17:13.359 }, 00:17:13.359 { 00:17:13.359 "name": "BaseBdev4", 00:17:13.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.359 "is_configured": false, 00:17:13.359 "data_offset": 0, 00:17:13.359 "data_size": 0 00:17:13.359 } 00:17:13.359 ] 00:17:13.359 }' 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.359 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.924 [2024-11-04 14:49:43.560494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.924 [2024-11-04 14:49:43.560574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.924 [2024-11-04 14:49:43.568523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.924 [2024-11-04 14:49:43.571220] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.924 [2024-11-04 14:49:43.571300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.924 [2024-11-04 14:49:43.571326] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.924 [2024-11-04 14:49:43.571356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.924 [2024-11-04 14:49:43.571375] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:13.924 [2024-11-04 14:49:43.571400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.924 14:49:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:17:13.924 "name": "Existed_Raid", 00:17:13.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.924 "strip_size_kb": 64, 00:17:13.924 "state": "configuring", 00:17:13.924 "raid_level": "raid0", 00:17:13.924 "superblock": false, 00:17:13.924 "num_base_bdevs": 4, 00:17:13.924 "num_base_bdevs_discovered": 1, 00:17:13.924 "num_base_bdevs_operational": 4, 00:17:13.924 "base_bdevs_list": [ 00:17:13.924 { 00:17:13.924 "name": "BaseBdev1", 00:17:13.924 "uuid": "9767cf84-bd9a-4645-8b1a-0d37c1f29da7", 00:17:13.924 "is_configured": true, 00:17:13.924 "data_offset": 0, 00:17:13.924 "data_size": 65536 00:17:13.924 }, 00:17:13.924 { 00:17:13.924 "name": "BaseBdev2", 00:17:13.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.924 "is_configured": false, 00:17:13.924 "data_offset": 0, 00:17:13.924 "data_size": 0 00:17:13.924 }, 00:17:13.924 { 00:17:13.924 "name": "BaseBdev3", 00:17:13.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.924 "is_configured": false, 00:17:13.924 "data_offset": 0, 00:17:13.924 "data_size": 0 00:17:13.924 }, 00:17:13.924 { 00:17:13.924 "name": "BaseBdev4", 00:17:13.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.924 "is_configured": false, 00:17:13.924 "data_offset": 0, 00:17:13.924 "data_size": 0 00:17:13.924 } 00:17:13.924 ] 00:17:13.924 }' 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.924 14:49:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.491 [2024-11-04 14:49:44.139158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:17:14.491 BaseBdev2 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.491 [ 00:17:14.491 { 00:17:14.491 "name": "BaseBdev2", 00:17:14.491 "aliases": [ 00:17:14.491 "ef40e5bb-c203-4b5c-ab7f-4b6c7ca75f9b" 00:17:14.491 ], 00:17:14.491 "product_name": "Malloc disk", 00:17:14.491 "block_size": 512, 00:17:14.491 "num_blocks": 65536, 00:17:14.491 "uuid": "ef40e5bb-c203-4b5c-ab7f-4b6c7ca75f9b", 00:17:14.491 "assigned_rate_limits": { 00:17:14.491 
"rw_ios_per_sec": 0, 00:17:14.491 "rw_mbytes_per_sec": 0, 00:17:14.491 "r_mbytes_per_sec": 0, 00:17:14.491 "w_mbytes_per_sec": 0 00:17:14.491 }, 00:17:14.491 "claimed": true, 00:17:14.491 "claim_type": "exclusive_write", 00:17:14.491 "zoned": false, 00:17:14.491 "supported_io_types": { 00:17:14.491 "read": true, 00:17:14.491 "write": true, 00:17:14.491 "unmap": true, 00:17:14.491 "flush": true, 00:17:14.491 "reset": true, 00:17:14.491 "nvme_admin": false, 00:17:14.491 "nvme_io": false, 00:17:14.491 "nvme_io_md": false, 00:17:14.491 "write_zeroes": true, 00:17:14.491 "zcopy": true, 00:17:14.491 "get_zone_info": false, 00:17:14.491 "zone_management": false, 00:17:14.491 "zone_append": false, 00:17:14.491 "compare": false, 00:17:14.491 "compare_and_write": false, 00:17:14.491 "abort": true, 00:17:14.491 "seek_hole": false, 00:17:14.491 "seek_data": false, 00:17:14.491 "copy": true, 00:17:14.491 "nvme_iov_md": false 00:17:14.491 }, 00:17:14.491 "memory_domains": [ 00:17:14.491 { 00:17:14.491 "dma_device_id": "system", 00:17:14.491 "dma_device_type": 1 00:17:14.491 }, 00:17:14.491 { 00:17:14.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.491 "dma_device_type": 2 00:17:14.491 } 00:17:14.491 ], 00:17:14.491 "driver_specific": {} 00:17:14.491 } 00:17:14.491 ] 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:14.491 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.492 14:49:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.492 "name": "Existed_Raid", 00:17:14.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.492 "strip_size_kb": 64, 00:17:14.492 "state": "configuring", 00:17:14.492 "raid_level": "raid0", 00:17:14.492 "superblock": false, 00:17:14.492 "num_base_bdevs": 4, 00:17:14.492 "num_base_bdevs_discovered": 2, 00:17:14.492 "num_base_bdevs_operational": 4, 00:17:14.492 "base_bdevs_list": [ 00:17:14.492 { 00:17:14.492 "name": "BaseBdev1", 
00:17:14.492 "uuid": "9767cf84-bd9a-4645-8b1a-0d37c1f29da7", 00:17:14.492 "is_configured": true, 00:17:14.492 "data_offset": 0, 00:17:14.492 "data_size": 65536 00:17:14.492 }, 00:17:14.492 { 00:17:14.492 "name": "BaseBdev2", 00:17:14.492 "uuid": "ef40e5bb-c203-4b5c-ab7f-4b6c7ca75f9b", 00:17:14.492 "is_configured": true, 00:17:14.492 "data_offset": 0, 00:17:14.492 "data_size": 65536 00:17:14.492 }, 00:17:14.492 { 00:17:14.492 "name": "BaseBdev3", 00:17:14.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.492 "is_configured": false, 00:17:14.492 "data_offset": 0, 00:17:14.492 "data_size": 0 00:17:14.492 }, 00:17:14.492 { 00:17:14.492 "name": "BaseBdev4", 00:17:14.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.492 "is_configured": false, 00:17:14.492 "data_offset": 0, 00:17:14.492 "data_size": 0 00:17:14.492 } 00:17:14.492 ] 00:17:14.492 }' 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.492 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 [2024-11-04 14:49:44.702535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:15.058 BaseBdev3 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # local bdev_timeout= 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 [ 00:17:15.058 { 00:17:15.058 "name": "BaseBdev3", 00:17:15.058 "aliases": [ 00:17:15.058 "a05cc133-a8fb-4f40-9953-4b315f9e4d2e" 00:17:15.058 ], 00:17:15.058 "product_name": "Malloc disk", 00:17:15.058 "block_size": 512, 00:17:15.058 "num_blocks": 65536, 00:17:15.058 "uuid": "a05cc133-a8fb-4f40-9953-4b315f9e4d2e", 00:17:15.058 "assigned_rate_limits": { 00:17:15.058 "rw_ios_per_sec": 0, 00:17:15.058 "rw_mbytes_per_sec": 0, 00:17:15.058 "r_mbytes_per_sec": 0, 00:17:15.058 "w_mbytes_per_sec": 0 00:17:15.058 }, 00:17:15.058 "claimed": true, 00:17:15.058 "claim_type": "exclusive_write", 00:17:15.058 "zoned": false, 00:17:15.058 "supported_io_types": { 00:17:15.058 "read": true, 00:17:15.058 "write": true, 00:17:15.058 "unmap": true, 00:17:15.058 "flush": true, 00:17:15.058 "reset": true, 00:17:15.058 "nvme_admin": false, 00:17:15.058 
"nvme_io": false, 00:17:15.058 "nvme_io_md": false, 00:17:15.058 "write_zeroes": true, 00:17:15.058 "zcopy": true, 00:17:15.058 "get_zone_info": false, 00:17:15.058 "zone_management": false, 00:17:15.058 "zone_append": false, 00:17:15.058 "compare": false, 00:17:15.058 "compare_and_write": false, 00:17:15.058 "abort": true, 00:17:15.058 "seek_hole": false, 00:17:15.058 "seek_data": false, 00:17:15.058 "copy": true, 00:17:15.058 "nvme_iov_md": false 00:17:15.058 }, 00:17:15.058 "memory_domains": [ 00:17:15.058 { 00:17:15.058 "dma_device_id": "system", 00:17:15.058 "dma_device_type": 1 00:17:15.058 }, 00:17:15.058 { 00:17:15.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.058 "dma_device_type": 2 00:17:15.058 } 00:17:15.058 ], 00:17:15.058 "driver_specific": {} 00:17:15.058 } 00:17:15.058 ] 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.058 14:49:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.058 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.058 "name": "Existed_Raid", 00:17:15.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.058 "strip_size_kb": 64, 00:17:15.058 "state": "configuring", 00:17:15.058 "raid_level": "raid0", 00:17:15.058 "superblock": false, 00:17:15.058 "num_base_bdevs": 4, 00:17:15.058 "num_base_bdevs_discovered": 3, 00:17:15.058 "num_base_bdevs_operational": 4, 00:17:15.058 "base_bdevs_list": [ 00:17:15.058 { 00:17:15.058 "name": "BaseBdev1", 00:17:15.058 "uuid": "9767cf84-bd9a-4645-8b1a-0d37c1f29da7", 00:17:15.058 "is_configured": true, 00:17:15.058 "data_offset": 0, 00:17:15.058 "data_size": 65536 00:17:15.058 }, 00:17:15.059 { 00:17:15.059 "name": "BaseBdev2", 00:17:15.059 "uuid": "ef40e5bb-c203-4b5c-ab7f-4b6c7ca75f9b", 00:17:15.059 "is_configured": true, 00:17:15.059 "data_offset": 0, 00:17:15.059 "data_size": 65536 00:17:15.059 }, 00:17:15.059 { 00:17:15.059 "name": "BaseBdev3", 00:17:15.059 
"uuid": "a05cc133-a8fb-4f40-9953-4b315f9e4d2e", 00:17:15.059 "is_configured": true, 00:17:15.059 "data_offset": 0, 00:17:15.059 "data_size": 65536 00:17:15.059 }, 00:17:15.059 { 00:17:15.059 "name": "BaseBdev4", 00:17:15.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.059 "is_configured": false, 00:17:15.059 "data_offset": 0, 00:17:15.059 "data_size": 0 00:17:15.059 } 00:17:15.059 ] 00:17:15.059 }' 00:17:15.059 14:49:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.059 14:49:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.625 [2024-11-04 14:49:45.299550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:15.625 [2024-11-04 14:49:45.299634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:15.625 [2024-11-04 14:49:45.299650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:15.625 [2024-11-04 14:49:45.300066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:15.625 [2024-11-04 14:49:45.300666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:15.625 [2024-11-04 14:49:45.300744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:15.625 [2024-11-04 14:49:45.301140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.625 BaseBdev4 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.625 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.625 [ 00:17:15.625 { 00:17:15.625 "name": "BaseBdev4", 00:17:15.625 "aliases": [ 00:17:15.625 "4151672c-e055-49fb-8ae4-cc78c9c524bd" 00:17:15.625 ], 00:17:15.625 "product_name": "Malloc disk", 00:17:15.625 "block_size": 512, 00:17:15.625 "num_blocks": 65536, 00:17:15.625 "uuid": "4151672c-e055-49fb-8ae4-cc78c9c524bd", 00:17:15.625 "assigned_rate_limits": { 00:17:15.625 "rw_ios_per_sec": 0, 00:17:15.625 "rw_mbytes_per_sec": 0, 00:17:15.625 "r_mbytes_per_sec": 0, 00:17:15.625 "w_mbytes_per_sec": 0 00:17:15.625 }, 
00:17:15.625 "claimed": true, 00:17:15.625 "claim_type": "exclusive_write", 00:17:15.625 "zoned": false, 00:17:15.625 "supported_io_types": { 00:17:15.625 "read": true, 00:17:15.625 "write": true, 00:17:15.625 "unmap": true, 00:17:15.626 "flush": true, 00:17:15.626 "reset": true, 00:17:15.626 "nvme_admin": false, 00:17:15.626 "nvme_io": false, 00:17:15.626 "nvme_io_md": false, 00:17:15.626 "write_zeroes": true, 00:17:15.626 "zcopy": true, 00:17:15.626 "get_zone_info": false, 00:17:15.626 "zone_management": false, 00:17:15.626 "zone_append": false, 00:17:15.626 "compare": false, 00:17:15.626 "compare_and_write": false, 00:17:15.626 "abort": true, 00:17:15.626 "seek_hole": false, 00:17:15.626 "seek_data": false, 00:17:15.626 "copy": true, 00:17:15.626 "nvme_iov_md": false 00:17:15.626 }, 00:17:15.626 "memory_domains": [ 00:17:15.626 { 00:17:15.626 "dma_device_id": "system", 00:17:15.626 "dma_device_type": 1 00:17:15.626 }, 00:17:15.626 { 00:17:15.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.626 "dma_device_type": 2 00:17:15.626 } 00:17:15.626 ], 00:17:15.626 "driver_specific": {} 00:17:15.626 } 00:17:15.626 ] 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.626 "name": "Existed_Raid", 00:17:15.626 "uuid": "62b725dc-5c0c-4cdb-a08e-0866c7f31373", 00:17:15.626 "strip_size_kb": 64, 00:17:15.626 "state": "online", 00:17:15.626 "raid_level": "raid0", 00:17:15.626 "superblock": false, 00:17:15.626 "num_base_bdevs": 4, 00:17:15.626 "num_base_bdevs_discovered": 4, 00:17:15.626 "num_base_bdevs_operational": 4, 00:17:15.626 "base_bdevs_list": [ 00:17:15.626 { 00:17:15.626 "name": "BaseBdev1", 00:17:15.626 "uuid": "9767cf84-bd9a-4645-8b1a-0d37c1f29da7", 00:17:15.626 "is_configured": true, 00:17:15.626 "data_offset": 0, 00:17:15.626 "data_size": 65536 
00:17:15.626 }, 00:17:15.626 { 00:17:15.626 "name": "BaseBdev2", 00:17:15.626 "uuid": "ef40e5bb-c203-4b5c-ab7f-4b6c7ca75f9b", 00:17:15.626 "is_configured": true, 00:17:15.626 "data_offset": 0, 00:17:15.626 "data_size": 65536 00:17:15.626 }, 00:17:15.626 { 00:17:15.626 "name": "BaseBdev3", 00:17:15.626 "uuid": "a05cc133-a8fb-4f40-9953-4b315f9e4d2e", 00:17:15.626 "is_configured": true, 00:17:15.626 "data_offset": 0, 00:17:15.626 "data_size": 65536 00:17:15.626 }, 00:17:15.626 { 00:17:15.626 "name": "BaseBdev4", 00:17:15.626 "uuid": "4151672c-e055-49fb-8ae4-cc78c9c524bd", 00:17:15.626 "is_configured": true, 00:17:15.626 "data_offset": 0, 00:17:15.626 "data_size": 65536 00:17:15.626 } 00:17:15.626 ] 00:17:15.626 }' 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.626 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:16.192 [2024-11-04 14:49:45.848325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.192 "name": "Existed_Raid", 00:17:16.192 "aliases": [ 00:17:16.192 "62b725dc-5c0c-4cdb-a08e-0866c7f31373" 00:17:16.192 ], 00:17:16.192 "product_name": "Raid Volume", 00:17:16.192 "block_size": 512, 00:17:16.192 "num_blocks": 262144, 00:17:16.192 "uuid": "62b725dc-5c0c-4cdb-a08e-0866c7f31373", 00:17:16.192 "assigned_rate_limits": { 00:17:16.192 "rw_ios_per_sec": 0, 00:17:16.192 "rw_mbytes_per_sec": 0, 00:17:16.192 "r_mbytes_per_sec": 0, 00:17:16.192 "w_mbytes_per_sec": 0 00:17:16.192 }, 00:17:16.192 "claimed": false, 00:17:16.192 "zoned": false, 00:17:16.192 "supported_io_types": { 00:17:16.192 "read": true, 00:17:16.192 "write": true, 00:17:16.192 "unmap": true, 00:17:16.192 "flush": true, 00:17:16.192 "reset": true, 00:17:16.192 "nvme_admin": false, 00:17:16.192 "nvme_io": false, 00:17:16.192 "nvme_io_md": false, 00:17:16.192 "write_zeroes": true, 00:17:16.192 "zcopy": false, 00:17:16.192 "get_zone_info": false, 00:17:16.192 "zone_management": false, 00:17:16.192 "zone_append": false, 00:17:16.192 "compare": false, 00:17:16.192 "compare_and_write": false, 00:17:16.192 "abort": false, 00:17:16.192 "seek_hole": false, 00:17:16.192 "seek_data": false, 00:17:16.192 "copy": false, 00:17:16.192 "nvme_iov_md": false 00:17:16.192 }, 00:17:16.192 "memory_domains": [ 00:17:16.192 { 00:17:16.192 "dma_device_id": "system", 00:17:16.192 "dma_device_type": 1 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.192 "dma_device_type": 2 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "dma_device_id": "system", 00:17:16.192 "dma_device_type": 1 00:17:16.192 }, 
00:17:16.192 { 00:17:16.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.192 "dma_device_type": 2 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "dma_device_id": "system", 00:17:16.192 "dma_device_type": 1 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.192 "dma_device_type": 2 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "dma_device_id": "system", 00:17:16.192 "dma_device_type": 1 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.192 "dma_device_type": 2 00:17:16.192 } 00:17:16.192 ], 00:17:16.192 "driver_specific": { 00:17:16.192 "raid": { 00:17:16.192 "uuid": "62b725dc-5c0c-4cdb-a08e-0866c7f31373", 00:17:16.192 "strip_size_kb": 64, 00:17:16.192 "state": "online", 00:17:16.192 "raid_level": "raid0", 00:17:16.192 "superblock": false, 00:17:16.192 "num_base_bdevs": 4, 00:17:16.192 "num_base_bdevs_discovered": 4, 00:17:16.192 "num_base_bdevs_operational": 4, 00:17:16.192 "base_bdevs_list": [ 00:17:16.192 { 00:17:16.192 "name": "BaseBdev1", 00:17:16.192 "uuid": "9767cf84-bd9a-4645-8b1a-0d37c1f29da7", 00:17:16.192 "is_configured": true, 00:17:16.192 "data_offset": 0, 00:17:16.192 "data_size": 65536 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "name": "BaseBdev2", 00:17:16.192 "uuid": "ef40e5bb-c203-4b5c-ab7f-4b6c7ca75f9b", 00:17:16.192 "is_configured": true, 00:17:16.192 "data_offset": 0, 00:17:16.192 "data_size": 65536 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "name": "BaseBdev3", 00:17:16.192 "uuid": "a05cc133-a8fb-4f40-9953-4b315f9e4d2e", 00:17:16.192 "is_configured": true, 00:17:16.192 "data_offset": 0, 00:17:16.192 "data_size": 65536 00:17:16.192 }, 00:17:16.192 { 00:17:16.192 "name": "BaseBdev4", 00:17:16.192 "uuid": "4151672c-e055-49fb-8ae4-cc78c9c524bd", 00:17:16.192 "is_configured": true, 00:17:16.192 "data_offset": 0, 00:17:16.192 "data_size": 65536 00:17:16.192 } 00:17:16.192 ] 00:17:16.192 } 00:17:16.192 } 00:17:16.192 }' 00:17:16.192 14:49:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:16.192 BaseBdev2 00:17:16.192 BaseBdev3 00:17:16.192 BaseBdev4' 00:17:16.192 14:49:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:16.192 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.451 [2024-11-04 14:49:46.236139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.451 [2024-11-04 14:49:46.236259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.451 [2024-11-04 14:49:46.236345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.451 14:49:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.451 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.710 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.710 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.710 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.710 "name": "Existed_Raid", 00:17:16.710 "uuid": "62b725dc-5c0c-4cdb-a08e-0866c7f31373", 00:17:16.710 "strip_size_kb": 64, 00:17:16.710 "state": "offline", 00:17:16.710 "raid_level": "raid0", 00:17:16.710 "superblock": false, 00:17:16.710 "num_base_bdevs": 4, 00:17:16.710 "num_base_bdevs_discovered": 3, 00:17:16.710 "num_base_bdevs_operational": 3, 00:17:16.710 "base_bdevs_list": [ 00:17:16.710 { 00:17:16.710 "name": null, 00:17:16.710 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:16.710 "is_configured": false, 00:17:16.710 "data_offset": 0, 00:17:16.710 "data_size": 65536 00:17:16.710 }, 00:17:16.710 { 00:17:16.710 "name": "BaseBdev2", 00:17:16.710 "uuid": "ef40e5bb-c203-4b5c-ab7f-4b6c7ca75f9b", 00:17:16.710 "is_configured": true, 00:17:16.710 "data_offset": 0, 00:17:16.710 "data_size": 65536 00:17:16.710 }, 00:17:16.710 { 00:17:16.710 "name": "BaseBdev3", 00:17:16.710 "uuid": "a05cc133-a8fb-4f40-9953-4b315f9e4d2e", 00:17:16.710 "is_configured": true, 00:17:16.710 "data_offset": 0, 00:17:16.710 "data_size": 65536 00:17:16.710 }, 00:17:16.710 { 00:17:16.710 "name": "BaseBdev4", 00:17:16.710 "uuid": "4151672c-e055-49fb-8ae4-cc78c9c524bd", 00:17:16.710 "is_configured": true, 00:17:16.710 "data_offset": 0, 00:17:16.710 "data_size": 65536 00:17:16.710 } 00:17:16.710 ] 00:17:16.710 }' 00:17:16.710 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.710 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.014 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:17.014 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.014 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.014 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:17.014 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.014 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.014 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.283 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:17.283 14:49:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.283 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:17.283 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.283 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.283 [2024-11-04 14:49:46.907712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:17.283 14:49:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.283 14:49:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.283 [2024-11-04 14:49:47.060145] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.283 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.541 [2024-11-04 14:49:47.215735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:17.541 [2024-11-04 14:49:47.215814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:17.541 14:49:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.541 BaseBdev2 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:17.541 
14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.541 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.541 [ 00:17:17.541 { 00:17:17.541 "name": "BaseBdev2", 00:17:17.541 "aliases": [ 00:17:17.541 "8fe1f960-0f2a-4cdb-b4cb-24fd34373784" 00:17:17.541 ], 00:17:17.541 "product_name": "Malloc disk", 00:17:17.541 "block_size": 512, 00:17:17.541 "num_blocks": 65536, 00:17:17.541 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:17.541 "assigned_rate_limits": { 00:17:17.541 "rw_ios_per_sec": 0, 00:17:17.541 "rw_mbytes_per_sec": 0, 00:17:17.541 "r_mbytes_per_sec": 0, 00:17:17.541 "w_mbytes_per_sec": 0 00:17:17.541 }, 00:17:17.541 "claimed": false, 00:17:17.541 "zoned": false, 00:17:17.541 "supported_io_types": { 00:17:17.541 "read": true, 00:17:17.541 "write": true, 00:17:17.541 "unmap": true, 00:17:17.541 "flush": true, 00:17:17.541 "reset": true, 00:17:17.800 "nvme_admin": false, 00:17:17.800 "nvme_io": false, 00:17:17.800 "nvme_io_md": false, 00:17:17.800 "write_zeroes": true, 
00:17:17.800 "zcopy": true, 00:17:17.800 "get_zone_info": false, 00:17:17.800 "zone_management": false, 00:17:17.800 "zone_append": false, 00:17:17.800 "compare": false, 00:17:17.800 "compare_and_write": false, 00:17:17.800 "abort": true, 00:17:17.800 "seek_hole": false, 00:17:17.800 "seek_data": false, 00:17:17.800 "copy": true, 00:17:17.800 "nvme_iov_md": false 00:17:17.800 }, 00:17:17.800 "memory_domains": [ 00:17:17.800 { 00:17:17.800 "dma_device_id": "system", 00:17:17.800 "dma_device_type": 1 00:17:17.800 }, 00:17:17.800 { 00:17:17.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.800 "dma_device_type": 2 00:17:17.800 } 00:17:17.800 ], 00:17:17.800 "driver_specific": {} 00:17:17.800 } 00:17:17.800 ] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.800 BaseBdev3 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:17.800 14:49:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.800 [ 00:17:17.800 { 00:17:17.800 "name": "BaseBdev3", 00:17:17.800 "aliases": [ 00:17:17.800 "e5d3ae15-fb58-40b6-be6f-5d13479ccf90" 00:17:17.800 ], 00:17:17.800 "product_name": "Malloc disk", 00:17:17.800 "block_size": 512, 00:17:17.800 "num_blocks": 65536, 00:17:17.800 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:17.800 "assigned_rate_limits": { 00:17:17.800 "rw_ios_per_sec": 0, 00:17:17.800 "rw_mbytes_per_sec": 0, 00:17:17.800 "r_mbytes_per_sec": 0, 00:17:17.800 "w_mbytes_per_sec": 0 00:17:17.800 }, 00:17:17.800 "claimed": false, 00:17:17.800 "zoned": false, 00:17:17.800 "supported_io_types": { 00:17:17.800 "read": true, 00:17:17.800 "write": true, 00:17:17.800 "unmap": true, 00:17:17.800 "flush": true, 00:17:17.800 "reset": true, 00:17:17.800 "nvme_admin": false, 00:17:17.800 "nvme_io": false, 00:17:17.800 "nvme_io_md": false, 00:17:17.800 "write_zeroes": true, 
00:17:17.800 "zcopy": true, 00:17:17.800 "get_zone_info": false, 00:17:17.800 "zone_management": false, 00:17:17.800 "zone_append": false, 00:17:17.800 "compare": false, 00:17:17.800 "compare_and_write": false, 00:17:17.800 "abort": true, 00:17:17.800 "seek_hole": false, 00:17:17.800 "seek_data": false, 00:17:17.800 "copy": true, 00:17:17.800 "nvme_iov_md": false 00:17:17.800 }, 00:17:17.800 "memory_domains": [ 00:17:17.800 { 00:17:17.800 "dma_device_id": "system", 00:17:17.800 "dma_device_type": 1 00:17:17.800 }, 00:17:17.800 { 00:17:17.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.800 "dma_device_type": 2 00:17:17.800 } 00:17:17.800 ], 00:17:17.800 "driver_specific": {} 00:17:17.800 } 00:17:17.800 ] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.800 BaseBdev4 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:17.800 14:49:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.800 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.801 [ 00:17:17.801 { 00:17:17.801 "name": "BaseBdev4", 00:17:17.801 "aliases": [ 00:17:17.801 "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3" 00:17:17.801 ], 00:17:17.801 "product_name": "Malloc disk", 00:17:17.801 "block_size": 512, 00:17:17.801 "num_blocks": 65536, 00:17:17.801 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:17.801 "assigned_rate_limits": { 00:17:17.801 "rw_ios_per_sec": 0, 00:17:17.801 "rw_mbytes_per_sec": 0, 00:17:17.801 "r_mbytes_per_sec": 0, 00:17:17.801 "w_mbytes_per_sec": 0 00:17:17.801 }, 00:17:17.801 "claimed": false, 00:17:17.801 "zoned": false, 00:17:17.801 "supported_io_types": { 00:17:17.801 "read": true, 00:17:17.801 "write": true, 00:17:17.801 "unmap": true, 00:17:17.801 "flush": true, 00:17:17.801 "reset": true, 00:17:17.801 "nvme_admin": false, 00:17:17.801 "nvme_io": false, 00:17:17.801 "nvme_io_md": false, 00:17:17.801 "write_zeroes": true, 
00:17:17.801 "zcopy": true, 00:17:17.801 "get_zone_info": false, 00:17:17.801 "zone_management": false, 00:17:17.801 "zone_append": false, 00:17:17.801 "compare": false, 00:17:17.801 "compare_and_write": false, 00:17:17.801 "abort": true, 00:17:17.801 "seek_hole": false, 00:17:17.801 "seek_data": false, 00:17:17.801 "copy": true, 00:17:17.801 "nvme_iov_md": false 00:17:17.801 }, 00:17:17.801 "memory_domains": [ 00:17:17.801 { 00:17:17.801 "dma_device_id": "system", 00:17:17.801 "dma_device_type": 1 00:17:17.801 }, 00:17:17.801 { 00:17:17.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.801 "dma_device_type": 2 00:17:17.801 } 00:17:17.801 ], 00:17:17.801 "driver_specific": {} 00:17:17.801 } 00:17:17.801 ] 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.801 [2024-11-04 14:49:47.610257] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:17.801 [2024-11-04 14:49:47.610315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:17.801 [2024-11-04 14:49:47.610347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.801 [2024-11-04 14:49:47.613045] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:17.801 [2024-11-04 14:49:47.613117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.801 "name": "Existed_Raid", 00:17:17.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.801 "strip_size_kb": 64, 00:17:17.801 "state": "configuring", 00:17:17.801 "raid_level": "raid0", 00:17:17.801 "superblock": false, 00:17:17.801 "num_base_bdevs": 4, 00:17:17.801 "num_base_bdevs_discovered": 3, 00:17:17.801 "num_base_bdevs_operational": 4, 00:17:17.801 "base_bdevs_list": [ 00:17:17.801 { 00:17:17.801 "name": "BaseBdev1", 00:17:17.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.801 "is_configured": false, 00:17:17.801 "data_offset": 0, 00:17:17.801 "data_size": 0 00:17:17.801 }, 00:17:17.801 { 00:17:17.801 "name": "BaseBdev2", 00:17:17.801 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:17.801 "is_configured": true, 00:17:17.801 "data_offset": 0, 00:17:17.801 "data_size": 65536 00:17:17.801 }, 00:17:17.801 { 00:17:17.801 "name": "BaseBdev3", 00:17:17.801 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:17.801 "is_configured": true, 00:17:17.801 "data_offset": 0, 00:17:17.801 "data_size": 65536 00:17:17.801 }, 00:17:17.801 { 00:17:17.801 "name": "BaseBdev4", 00:17:17.801 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:17.801 "is_configured": true, 00:17:17.801 "data_offset": 0, 00:17:17.801 "data_size": 65536 00:17:17.801 } 00:17:17.801 ] 00:17:17.801 }' 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.801 14:49:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.368 [2024-11-04 14:49:48.146463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.368 
14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.368 "name": "Existed_Raid", 00:17:18.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.368 "strip_size_kb": 64, 00:17:18.368 "state": "configuring", 00:17:18.368 "raid_level": "raid0", 00:17:18.368 "superblock": false, 00:17:18.368 "num_base_bdevs": 4, 00:17:18.368 "num_base_bdevs_discovered": 2, 00:17:18.368 "num_base_bdevs_operational": 4, 00:17:18.368 "base_bdevs_list": [ 00:17:18.368 { 00:17:18.368 "name": "BaseBdev1", 00:17:18.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.368 "is_configured": false, 00:17:18.368 "data_offset": 0, 00:17:18.368 "data_size": 0 00:17:18.368 }, 00:17:18.368 { 00:17:18.368 "name": null, 00:17:18.368 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:18.368 "is_configured": false, 00:17:18.368 "data_offset": 0, 00:17:18.368 "data_size": 65536 00:17:18.368 }, 00:17:18.368 { 00:17:18.368 "name": "BaseBdev3", 00:17:18.368 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:18.368 "is_configured": true, 00:17:18.368 "data_offset": 0, 00:17:18.368 "data_size": 65536 00:17:18.368 }, 00:17:18.368 { 00:17:18.368 "name": "BaseBdev4", 00:17:18.368 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:18.368 "is_configured": true, 00:17:18.368 "data_offset": 0, 00:17:18.368 "data_size": 65536 00:17:18.368 } 00:17:18.368 ] 00:17:18.368 }' 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.368 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.935 14:49:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.935 [2024-11-04 14:49:48.796144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.935 BaseBdev1 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.935 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.935 [ 00:17:18.935 { 00:17:18.935 "name": "BaseBdev1", 00:17:18.935 "aliases": [ 00:17:18.935 "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1" 00:17:18.935 ], 00:17:18.935 "product_name": "Malloc disk", 00:17:18.935 "block_size": 512, 00:17:18.935 "num_blocks": 65536, 00:17:18.935 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:18.935 "assigned_rate_limits": { 00:17:18.935 "rw_ios_per_sec": 0, 00:17:18.935 "rw_mbytes_per_sec": 0, 00:17:18.935 "r_mbytes_per_sec": 0, 00:17:18.935 "w_mbytes_per_sec": 0 00:17:18.935 }, 00:17:18.935 "claimed": true, 00:17:18.935 "claim_type": "exclusive_write", 00:17:18.935 "zoned": false, 00:17:18.935 "supported_io_types": { 00:17:18.935 "read": true, 00:17:18.935 "write": true, 00:17:18.935 "unmap": true, 00:17:18.935 "flush": true, 00:17:18.935 "reset": true, 00:17:18.935 "nvme_admin": false, 00:17:18.935 "nvme_io": false, 00:17:18.935 "nvme_io_md": false, 00:17:18.935 "write_zeroes": true, 00:17:18.935 "zcopy": true, 00:17:18.935 "get_zone_info": false, 00:17:18.935 "zone_management": false, 00:17:18.935 "zone_append": false, 00:17:18.935 "compare": false, 00:17:18.935 "compare_and_write": false, 00:17:18.935 "abort": true, 00:17:18.935 "seek_hole": false, 00:17:18.935 "seek_data": false, 00:17:18.935 "copy": true, 00:17:18.935 "nvme_iov_md": false 00:17:19.193 }, 00:17:19.193 "memory_domains": [ 00:17:19.193 { 00:17:19.193 "dma_device_id": "system", 00:17:19.193 "dma_device_type": 1 00:17:19.193 }, 00:17:19.193 { 00:17:19.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.193 "dma_device_type": 2 00:17:19.193 } 00:17:19.193 ], 00:17:19.193 "driver_specific": {} 
00:17:19.193 } 00:17:19.193 ] 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.193 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.193 14:49:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.193 "name": "Existed_Raid", 00:17:19.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.193 "strip_size_kb": 64, 00:17:19.193 "state": "configuring", 00:17:19.193 "raid_level": "raid0", 00:17:19.193 "superblock": false, 00:17:19.193 "num_base_bdevs": 4, 00:17:19.193 "num_base_bdevs_discovered": 3, 00:17:19.193 "num_base_bdevs_operational": 4, 00:17:19.193 "base_bdevs_list": [ 00:17:19.193 { 00:17:19.193 "name": "BaseBdev1", 00:17:19.193 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:19.193 "is_configured": true, 00:17:19.193 "data_offset": 0, 00:17:19.193 "data_size": 65536 00:17:19.193 }, 00:17:19.193 { 00:17:19.193 "name": null, 00:17:19.193 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:19.193 "is_configured": false, 00:17:19.193 "data_offset": 0, 00:17:19.193 "data_size": 65536 00:17:19.193 }, 00:17:19.193 { 00:17:19.193 "name": "BaseBdev3", 00:17:19.193 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:19.193 "is_configured": true, 00:17:19.193 "data_offset": 0, 00:17:19.193 "data_size": 65536 00:17:19.193 }, 00:17:19.193 { 00:17:19.193 "name": "BaseBdev4", 00:17:19.193 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:19.193 "is_configured": true, 00:17:19.193 "data_offset": 0, 00:17:19.193 "data_size": 65536 00:17:19.193 } 00:17:19.193 ] 00:17:19.194 }' 00:17:19.194 14:49:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.194 14:49:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.760 14:49:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.760 [2024-11-04 14:49:49.452441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.760 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.761 "name": "Existed_Raid", 00:17:19.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.761 "strip_size_kb": 64, 00:17:19.761 "state": "configuring", 00:17:19.761 "raid_level": "raid0", 00:17:19.761 "superblock": false, 00:17:19.761 "num_base_bdevs": 4, 00:17:19.761 "num_base_bdevs_discovered": 2, 00:17:19.761 "num_base_bdevs_operational": 4, 00:17:19.761 "base_bdevs_list": [ 00:17:19.761 { 00:17:19.761 "name": "BaseBdev1", 00:17:19.761 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:19.761 "is_configured": true, 00:17:19.761 "data_offset": 0, 00:17:19.761 "data_size": 65536 00:17:19.761 }, 00:17:19.761 { 00:17:19.761 "name": null, 00:17:19.761 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:19.761 "is_configured": false, 00:17:19.761 "data_offset": 0, 00:17:19.761 "data_size": 65536 00:17:19.761 }, 00:17:19.761 { 00:17:19.761 "name": null, 00:17:19.761 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:19.761 "is_configured": false, 00:17:19.761 "data_offset": 0, 00:17:19.761 "data_size": 65536 00:17:19.761 }, 00:17:19.761 { 00:17:19.761 "name": "BaseBdev4", 00:17:19.761 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:19.761 "is_configured": true, 00:17:19.761 "data_offset": 0, 00:17:19.761 "data_size": 65536 00:17:19.761 } 00:17:19.761 ] 
00:17:19.761 }' 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.761 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.327 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.327 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.327 14:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.327 14:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.327 [2024-11-04 14:49:50.060652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:20.327 
14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.327 "name": "Existed_Raid", 00:17:20.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.327 "strip_size_kb": 64, 00:17:20.327 "state": "configuring", 00:17:20.327 "raid_level": "raid0", 00:17:20.327 "superblock": false, 00:17:20.327 "num_base_bdevs": 4, 00:17:20.327 "num_base_bdevs_discovered": 3, 00:17:20.327 "num_base_bdevs_operational": 4, 00:17:20.327 "base_bdevs_list": [ 00:17:20.327 { 00:17:20.327 "name": "BaseBdev1", 00:17:20.327 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:20.327 "is_configured": true, 00:17:20.327 "data_offset": 0, 00:17:20.327 "data_size": 65536 00:17:20.327 }, 00:17:20.327 { 00:17:20.327 "name": null, 
00:17:20.327 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:20.327 "is_configured": false, 00:17:20.327 "data_offset": 0, 00:17:20.327 "data_size": 65536 00:17:20.327 }, 00:17:20.327 { 00:17:20.327 "name": "BaseBdev3", 00:17:20.327 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:20.327 "is_configured": true, 00:17:20.327 "data_offset": 0, 00:17:20.327 "data_size": 65536 00:17:20.327 }, 00:17:20.327 { 00:17:20.327 "name": "BaseBdev4", 00:17:20.327 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:20.327 "is_configured": true, 00:17:20.327 "data_offset": 0, 00:17:20.327 "data_size": 65536 00:17:20.327 } 00:17:20.327 ] 00:17:20.327 }' 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.327 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.892 [2024-11-04 14:49:50.656973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:20.892 
14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.892 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.150 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.150 "name": "Existed_Raid", 00:17:21.150 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:21.150 "strip_size_kb": 64, 00:17:21.150 "state": "configuring", 00:17:21.150 "raid_level": "raid0", 00:17:21.150 "superblock": false, 00:17:21.150 "num_base_bdevs": 4, 00:17:21.150 "num_base_bdevs_discovered": 2, 00:17:21.150 "num_base_bdevs_operational": 4, 00:17:21.150 "base_bdevs_list": [ 00:17:21.150 { 00:17:21.150 "name": null, 00:17:21.150 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:21.150 "is_configured": false, 00:17:21.150 "data_offset": 0, 00:17:21.150 "data_size": 65536 00:17:21.150 }, 00:17:21.150 { 00:17:21.150 "name": null, 00:17:21.150 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:21.150 "is_configured": false, 00:17:21.150 "data_offset": 0, 00:17:21.150 "data_size": 65536 00:17:21.150 }, 00:17:21.150 { 00:17:21.150 "name": "BaseBdev3", 00:17:21.150 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:21.150 "is_configured": true, 00:17:21.150 "data_offset": 0, 00:17:21.150 "data_size": 65536 00:17:21.150 }, 00:17:21.150 { 00:17:21.150 "name": "BaseBdev4", 00:17:21.150 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:21.150 "is_configured": true, 00:17:21.150 "data_offset": 0, 00:17:21.150 "data_size": 65536 00:17:21.150 } 00:17:21.150 ] 00:17:21.150 }' 00:17:21.150 14:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.150 14:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.717 [2024-11-04 14:49:51.357992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.717 "name": "Existed_Raid", 00:17:21.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.717 "strip_size_kb": 64, 00:17:21.717 "state": "configuring", 00:17:21.717 "raid_level": "raid0", 00:17:21.717 "superblock": false, 00:17:21.717 "num_base_bdevs": 4, 00:17:21.717 "num_base_bdevs_discovered": 3, 00:17:21.717 "num_base_bdevs_operational": 4, 00:17:21.717 "base_bdevs_list": [ 00:17:21.717 { 00:17:21.717 "name": null, 00:17:21.717 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:21.717 "is_configured": false, 00:17:21.717 "data_offset": 0, 00:17:21.717 "data_size": 65536 00:17:21.717 }, 00:17:21.717 { 00:17:21.717 "name": "BaseBdev2", 00:17:21.717 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:21.717 "is_configured": true, 00:17:21.717 "data_offset": 0, 00:17:21.717 "data_size": 65536 00:17:21.717 }, 00:17:21.717 { 00:17:21.717 "name": "BaseBdev3", 00:17:21.717 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:21.717 "is_configured": true, 00:17:21.717 "data_offset": 0, 00:17:21.717 "data_size": 65536 00:17:21.717 }, 00:17:21.717 { 00:17:21.717 "name": "BaseBdev4", 00:17:21.717 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:21.717 "is_configured": true, 00:17:21.717 "data_offset": 0, 00:17:21.717 "data_size": 65536 00:17:21.717 } 00:17:21.717 ] 00:17:21.717 }' 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:21.717 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.975 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:21.975 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.975 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.975 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.975 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.234 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:22.234 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.234 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.234 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:22.234 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.234 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.234 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.235 [2024-11-04 14:49:51.976141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:22.235 [2024-11-04 14:49:51.976208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:17:22.235 [2024-11-04 14:49:51.976220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:22.235 [2024-11-04 14:49:51.976645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:22.235 [2024-11-04 14:49:51.976866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:22.235 [2024-11-04 14:49:51.976894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:22.235 [2024-11-04 14:49:51.977212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.235 NewBaseBdev 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.235 14:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.235 [ 00:17:22.235 { 00:17:22.235 "name": "NewBaseBdev", 00:17:22.235 "aliases": [ 00:17:22.235 "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1" 00:17:22.235 ], 00:17:22.235 "product_name": "Malloc disk", 00:17:22.235 "block_size": 512, 00:17:22.235 "num_blocks": 65536, 00:17:22.235 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:22.235 "assigned_rate_limits": { 00:17:22.235 "rw_ios_per_sec": 0, 00:17:22.235 "rw_mbytes_per_sec": 0, 00:17:22.235 "r_mbytes_per_sec": 0, 00:17:22.235 "w_mbytes_per_sec": 0 00:17:22.235 }, 00:17:22.235 "claimed": true, 00:17:22.235 "claim_type": "exclusive_write", 00:17:22.235 "zoned": false, 00:17:22.235 "supported_io_types": { 00:17:22.235 "read": true, 00:17:22.235 "write": true, 00:17:22.235 "unmap": true, 00:17:22.235 "flush": true, 00:17:22.235 "reset": true, 00:17:22.235 "nvme_admin": false, 00:17:22.235 "nvme_io": false, 00:17:22.235 "nvme_io_md": false, 00:17:22.235 "write_zeroes": true, 00:17:22.235 "zcopy": true, 00:17:22.235 "get_zone_info": false, 00:17:22.235 "zone_management": false, 00:17:22.235 "zone_append": false, 00:17:22.235 "compare": false, 00:17:22.235 "compare_and_write": false, 00:17:22.235 "abort": true, 00:17:22.235 "seek_hole": false, 00:17:22.235 "seek_data": false, 00:17:22.235 "copy": true, 00:17:22.235 "nvme_iov_md": false 00:17:22.235 }, 00:17:22.235 "memory_domains": [ 00:17:22.235 { 00:17:22.235 "dma_device_id": "system", 00:17:22.235 "dma_device_type": 1 00:17:22.235 }, 00:17:22.235 { 00:17:22.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.235 "dma_device_type": 2 00:17:22.235 } 00:17:22.235 ], 00:17:22.235 "driver_specific": {} 00:17:22.235 } 00:17:22.235 ] 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.235 "name": 
"Existed_Raid", 00:17:22.235 "uuid": "6c23d091-7ef9-4c27-95bc-83b7de0917b1", 00:17:22.235 "strip_size_kb": 64, 00:17:22.235 "state": "online", 00:17:22.235 "raid_level": "raid0", 00:17:22.235 "superblock": false, 00:17:22.235 "num_base_bdevs": 4, 00:17:22.235 "num_base_bdevs_discovered": 4, 00:17:22.235 "num_base_bdevs_operational": 4, 00:17:22.235 "base_bdevs_list": [ 00:17:22.235 { 00:17:22.235 "name": "NewBaseBdev", 00:17:22.235 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:22.235 "is_configured": true, 00:17:22.235 "data_offset": 0, 00:17:22.235 "data_size": 65536 00:17:22.235 }, 00:17:22.235 { 00:17:22.235 "name": "BaseBdev2", 00:17:22.235 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:22.235 "is_configured": true, 00:17:22.235 "data_offset": 0, 00:17:22.235 "data_size": 65536 00:17:22.235 }, 00:17:22.235 { 00:17:22.235 "name": "BaseBdev3", 00:17:22.235 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:22.235 "is_configured": true, 00:17:22.235 "data_offset": 0, 00:17:22.235 "data_size": 65536 00:17:22.235 }, 00:17:22.235 { 00:17:22.235 "name": "BaseBdev4", 00:17:22.235 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:22.235 "is_configured": true, 00:17:22.235 "data_offset": 0, 00:17:22.235 "data_size": 65536 00:17:22.235 } 00:17:22.235 ] 00:17:22.235 }' 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.235 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:22.801 14:49:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:22.801 [2024-11-04 14:49:52.504843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.801 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:22.801 "name": "Existed_Raid", 00:17:22.801 "aliases": [ 00:17:22.801 "6c23d091-7ef9-4c27-95bc-83b7de0917b1" 00:17:22.801 ], 00:17:22.801 "product_name": "Raid Volume", 00:17:22.801 "block_size": 512, 00:17:22.802 "num_blocks": 262144, 00:17:22.802 "uuid": "6c23d091-7ef9-4c27-95bc-83b7de0917b1", 00:17:22.802 "assigned_rate_limits": { 00:17:22.802 "rw_ios_per_sec": 0, 00:17:22.802 "rw_mbytes_per_sec": 0, 00:17:22.802 "r_mbytes_per_sec": 0, 00:17:22.802 "w_mbytes_per_sec": 0 00:17:22.802 }, 00:17:22.802 "claimed": false, 00:17:22.802 "zoned": false, 00:17:22.802 "supported_io_types": { 00:17:22.802 "read": true, 00:17:22.802 "write": true, 00:17:22.802 "unmap": true, 00:17:22.802 "flush": true, 00:17:22.802 "reset": true, 00:17:22.802 "nvme_admin": false, 00:17:22.802 "nvme_io": false, 00:17:22.802 "nvme_io_md": false, 00:17:22.802 "write_zeroes": true, 00:17:22.802 "zcopy": false, 00:17:22.802 "get_zone_info": false, 00:17:22.802 "zone_management": false, 00:17:22.802 "zone_append": false, 00:17:22.802 "compare": 
false, 00:17:22.802 "compare_and_write": false, 00:17:22.802 "abort": false, 00:17:22.802 "seek_hole": false, 00:17:22.802 "seek_data": false, 00:17:22.802 "copy": false, 00:17:22.802 "nvme_iov_md": false 00:17:22.802 }, 00:17:22.802 "memory_domains": [ 00:17:22.802 { 00:17:22.802 "dma_device_id": "system", 00:17:22.802 "dma_device_type": 1 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.802 "dma_device_type": 2 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "dma_device_id": "system", 00:17:22.802 "dma_device_type": 1 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.802 "dma_device_type": 2 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "dma_device_id": "system", 00:17:22.802 "dma_device_type": 1 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.802 "dma_device_type": 2 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "dma_device_id": "system", 00:17:22.802 "dma_device_type": 1 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.802 "dma_device_type": 2 00:17:22.802 } 00:17:22.802 ], 00:17:22.802 "driver_specific": { 00:17:22.802 "raid": { 00:17:22.802 "uuid": "6c23d091-7ef9-4c27-95bc-83b7de0917b1", 00:17:22.802 "strip_size_kb": 64, 00:17:22.802 "state": "online", 00:17:22.802 "raid_level": "raid0", 00:17:22.802 "superblock": false, 00:17:22.802 "num_base_bdevs": 4, 00:17:22.802 "num_base_bdevs_discovered": 4, 00:17:22.802 "num_base_bdevs_operational": 4, 00:17:22.802 "base_bdevs_list": [ 00:17:22.802 { 00:17:22.802 "name": "NewBaseBdev", 00:17:22.802 "uuid": "5b5c2fa2-3d13-4597-95f5-7ad3ef8355d1", 00:17:22.802 "is_configured": true, 00:17:22.802 "data_offset": 0, 00:17:22.802 "data_size": 65536 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "name": "BaseBdev2", 00:17:22.802 "uuid": "8fe1f960-0f2a-4cdb-b4cb-24fd34373784", 00:17:22.802 "is_configured": true, 00:17:22.802 "data_offset": 0, 00:17:22.802 
"data_size": 65536 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "name": "BaseBdev3", 00:17:22.802 "uuid": "e5d3ae15-fb58-40b6-be6f-5d13479ccf90", 00:17:22.802 "is_configured": true, 00:17:22.802 "data_offset": 0, 00:17:22.802 "data_size": 65536 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "name": "BaseBdev4", 00:17:22.802 "uuid": "6a97f3b1-cc52-440b-8a94-7bd96a3f18e3", 00:17:22.802 "is_configured": true, 00:17:22.802 "data_offset": 0, 00:17:22.802 "data_size": 65536 00:17:22.802 } 00:17:22.802 ] 00:17:22.802 } 00:17:22.802 } 00:17:22.802 }' 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:22.802 BaseBdev2 00:17:22.802 BaseBdev3 00:17:22.802 BaseBdev4' 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.802 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 
' 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:23.061 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.062 [2024-11-04 14:49:52.868467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.062 [2024-11-04 14:49:52.868507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.062 [2024-11-04 14:49:52.868623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.062 [2024-11-04 14:49:52.868722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.062 [2024-11-04 14:49:52.868755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:23.062 14:49:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69579 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 69579 ']' 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69579 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69579 00:17:23.062 killing process with pid 69579 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69579' 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69579 00:17:23.062 [2024-11-04 14:49:52.904703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:23.062 14:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69579 00:17:23.629 [2024-11-04 14:49:53.286813] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.564 14:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:24.564 00:17:24.564 real 0m13.186s 00:17:24.564 user 0m21.645s 00:17:24.564 sys 0m1.908s 00:17:24.564 14:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:24.564 14:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:24.564 ************************************ 00:17:24.564 END TEST raid_state_function_test 00:17:24.564 ************************************ 00:17:24.822 14:49:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:24.822 14:49:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:24.822 14:49:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:24.822 14:49:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.822 ************************************ 00:17:24.822 START TEST raid_state_function_test_sb 00:17:24.822 ************************************ 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i++ )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.822 Process raid pid: 70269 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:24.822 
14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:24.822 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70269 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70269' 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70269 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70269 ']' 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:24.823 14:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.823 [2024-11-04 14:49:54.641392] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:17:24.823 [2024-11-04 14:49:54.641956] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.081 [2024-11-04 14:49:54.840099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.339 [2024-11-04 14:49:55.019055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.597 [2024-11-04 14:49:55.268415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.597 [2024-11-04 14:49:55.268765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.856 [2024-11-04 14:49:55.626492] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.856 [2024-11-04 14:49:55.626561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.856 [2024-11-04 14:49:55.626578] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.856 [2024-11-04 14:49:55.626596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.856 [2024-11-04 14:49:55.626606] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:17:25.856 [2024-11-04 14:49:55.626621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.856 [2024-11-04 14:49:55.626631] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:25.856 [2024-11-04 14:49:55.626646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.856 14:49:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.856 "name": "Existed_Raid", 00:17:25.856 "uuid": "50f0edaf-530c-42d7-b1ae-d00da06a9095", 00:17:25.856 "strip_size_kb": 64, 00:17:25.856 "state": "configuring", 00:17:25.856 "raid_level": "raid0", 00:17:25.856 "superblock": true, 00:17:25.856 "num_base_bdevs": 4, 00:17:25.856 "num_base_bdevs_discovered": 0, 00:17:25.856 "num_base_bdevs_operational": 4, 00:17:25.856 "base_bdevs_list": [ 00:17:25.856 { 00:17:25.856 "name": "BaseBdev1", 00:17:25.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.856 "is_configured": false, 00:17:25.856 "data_offset": 0, 00:17:25.856 "data_size": 0 00:17:25.856 }, 00:17:25.856 { 00:17:25.856 "name": "BaseBdev2", 00:17:25.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.856 "is_configured": false, 00:17:25.856 "data_offset": 0, 00:17:25.856 "data_size": 0 00:17:25.856 }, 00:17:25.856 { 00:17:25.856 "name": "BaseBdev3", 00:17:25.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.856 "is_configured": false, 00:17:25.856 "data_offset": 0, 00:17:25.856 "data_size": 0 00:17:25.856 }, 00:17:25.856 { 00:17:25.856 "name": "BaseBdev4", 00:17:25.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.856 "is_configured": false, 00:17:25.856 "data_offset": 0, 00:17:25.856 "data_size": 0 00:17:25.856 } 00:17:25.856 ] 00:17:25.856 }' 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.856 14:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.422 14:49:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.423 [2024-11-04 14:49:56.142703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.423 [2024-11-04 14:49:56.142767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.423 [2024-11-04 14:49:56.150643] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.423 [2024-11-04 14:49:56.150688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.423 [2024-11-04 14:49:56.150702] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.423 [2024-11-04 14:49:56.150716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.423 [2024-11-04 14:49:56.150724] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.423 [2024-11-04 14:49:56.150738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.423 [2024-11-04 14:49:56.150746] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:17:26.423 [2024-11-04 14:49:56.150759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.423 [2024-11-04 14:49:56.203638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.423 BaseBdev1 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.423 [ 00:17:26.423 { 00:17:26.423 "name": "BaseBdev1", 00:17:26.423 "aliases": [ 00:17:26.423 "edce2d73-586f-4149-926f-0f24cf58ca18" 00:17:26.423 ], 00:17:26.423 "product_name": "Malloc disk", 00:17:26.423 "block_size": 512, 00:17:26.423 "num_blocks": 65536, 00:17:26.423 "uuid": "edce2d73-586f-4149-926f-0f24cf58ca18", 00:17:26.423 "assigned_rate_limits": { 00:17:26.423 "rw_ios_per_sec": 0, 00:17:26.423 "rw_mbytes_per_sec": 0, 00:17:26.423 "r_mbytes_per_sec": 0, 00:17:26.423 "w_mbytes_per_sec": 0 00:17:26.423 }, 00:17:26.423 "claimed": true, 00:17:26.423 "claim_type": "exclusive_write", 00:17:26.423 "zoned": false, 00:17:26.423 "supported_io_types": { 00:17:26.423 "read": true, 00:17:26.423 "write": true, 00:17:26.423 "unmap": true, 00:17:26.423 "flush": true, 00:17:26.423 "reset": true, 00:17:26.423 "nvme_admin": false, 00:17:26.423 "nvme_io": false, 00:17:26.423 "nvme_io_md": false, 00:17:26.423 "write_zeroes": true, 00:17:26.423 "zcopy": true, 00:17:26.423 "get_zone_info": false, 00:17:26.423 "zone_management": false, 00:17:26.423 "zone_append": false, 00:17:26.423 "compare": false, 00:17:26.423 "compare_and_write": false, 00:17:26.423 "abort": true, 00:17:26.423 "seek_hole": false, 00:17:26.423 "seek_data": false, 00:17:26.423 "copy": true, 00:17:26.423 "nvme_iov_md": false 00:17:26.423 }, 00:17:26.423 "memory_domains": [ 00:17:26.423 { 00:17:26.423 "dma_device_id": "system", 00:17:26.423 "dma_device_type": 1 00:17:26.423 }, 00:17:26.423 { 00:17:26.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.423 "dma_device_type": 2 00:17:26.423 } 
00:17:26.423 ], 00:17:26.423 "driver_specific": {} 00:17:26.423 } 00:17:26.423 ] 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.423 14:49:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.423 "name": "Existed_Raid", 00:17:26.423 "uuid": "fb5c8b41-f2ec-442a-9124-b031397dbadc", 00:17:26.423 "strip_size_kb": 64, 00:17:26.423 "state": "configuring", 00:17:26.423 "raid_level": "raid0", 00:17:26.423 "superblock": true, 00:17:26.423 "num_base_bdevs": 4, 00:17:26.423 "num_base_bdevs_discovered": 1, 00:17:26.423 "num_base_bdevs_operational": 4, 00:17:26.423 "base_bdevs_list": [ 00:17:26.423 { 00:17:26.423 "name": "BaseBdev1", 00:17:26.423 "uuid": "edce2d73-586f-4149-926f-0f24cf58ca18", 00:17:26.423 "is_configured": true, 00:17:26.423 "data_offset": 2048, 00:17:26.423 "data_size": 63488 00:17:26.423 }, 00:17:26.423 { 00:17:26.423 "name": "BaseBdev2", 00:17:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.423 "is_configured": false, 00:17:26.423 "data_offset": 0, 00:17:26.423 "data_size": 0 00:17:26.423 }, 00:17:26.423 { 00:17:26.423 "name": "BaseBdev3", 00:17:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.423 "is_configured": false, 00:17:26.423 "data_offset": 0, 00:17:26.423 "data_size": 0 00:17:26.423 }, 00:17:26.423 { 00:17:26.423 "name": "BaseBdev4", 00:17:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.423 "is_configured": false, 00:17:26.423 "data_offset": 0, 00:17:26.423 "data_size": 0 00:17:26.423 } 00:17:26.423 ] 00:17:26.423 }' 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.423 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.992 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.993 14:49:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.993 [2024-11-04 14:49:56.775839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.993 [2024-11-04 14:49:56.775920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.993 [2024-11-04 14:49:56.783880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.993 [2024-11-04 14:49:56.786520] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.993 [2024-11-04 14:49:56.786570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.993 [2024-11-04 14:49:56.786587] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.993 [2024-11-04 14:49:56.786605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.993 [2024-11-04 14:49:56.786616] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:26.993 [2024-11-04 14:49:56.786630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:26.993 "name": "Existed_Raid", 00:17:26.993 "uuid": "90e71cac-fb7c-4015-b79f-8c3d92d950aa", 00:17:26.993 "strip_size_kb": 64, 00:17:26.993 "state": "configuring", 00:17:26.993 "raid_level": "raid0", 00:17:26.993 "superblock": true, 00:17:26.993 "num_base_bdevs": 4, 00:17:26.993 "num_base_bdevs_discovered": 1, 00:17:26.993 "num_base_bdevs_operational": 4, 00:17:26.993 "base_bdevs_list": [ 00:17:26.993 { 00:17:26.993 "name": "BaseBdev1", 00:17:26.993 "uuid": "edce2d73-586f-4149-926f-0f24cf58ca18", 00:17:26.993 "is_configured": true, 00:17:26.993 "data_offset": 2048, 00:17:26.993 "data_size": 63488 00:17:26.993 }, 00:17:26.993 { 00:17:26.993 "name": "BaseBdev2", 00:17:26.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.993 "is_configured": false, 00:17:26.993 "data_offset": 0, 00:17:26.993 "data_size": 0 00:17:26.993 }, 00:17:26.993 { 00:17:26.993 "name": "BaseBdev3", 00:17:26.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.993 "is_configured": false, 00:17:26.993 "data_offset": 0, 00:17:26.993 "data_size": 0 00:17:26.993 }, 00:17:26.993 { 00:17:26.993 "name": "BaseBdev4", 00:17:26.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.993 "is_configured": false, 00:17:26.993 "data_offset": 0, 00:17:26.993 "data_size": 0 00:17:26.993 } 00:17:26.993 ] 00:17:26.993 }' 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.993 14:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.567 [2024-11-04 14:49:57.354535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:17:27.567 BaseBdev2 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.567 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.567 [ 00:17:27.567 { 00:17:27.567 "name": "BaseBdev2", 00:17:27.567 "aliases": [ 00:17:27.567 "1f926a4c-d7bb-447e-aab6-636a98d97273" 00:17:27.567 ], 00:17:27.567 "product_name": "Malloc disk", 00:17:27.567 "block_size": 512, 00:17:27.567 "num_blocks": 65536, 00:17:27.567 "uuid": "1f926a4c-d7bb-447e-aab6-636a98d97273", 
00:17:27.567 "assigned_rate_limits": { 00:17:27.567 "rw_ios_per_sec": 0, 00:17:27.567 "rw_mbytes_per_sec": 0, 00:17:27.568 "r_mbytes_per_sec": 0, 00:17:27.568 "w_mbytes_per_sec": 0 00:17:27.568 }, 00:17:27.568 "claimed": true, 00:17:27.568 "claim_type": "exclusive_write", 00:17:27.568 "zoned": false, 00:17:27.568 "supported_io_types": { 00:17:27.568 "read": true, 00:17:27.568 "write": true, 00:17:27.568 "unmap": true, 00:17:27.568 "flush": true, 00:17:27.568 "reset": true, 00:17:27.568 "nvme_admin": false, 00:17:27.568 "nvme_io": false, 00:17:27.568 "nvme_io_md": false, 00:17:27.568 "write_zeroes": true, 00:17:27.568 "zcopy": true, 00:17:27.568 "get_zone_info": false, 00:17:27.568 "zone_management": false, 00:17:27.568 "zone_append": false, 00:17:27.568 "compare": false, 00:17:27.568 "compare_and_write": false, 00:17:27.568 "abort": true, 00:17:27.568 "seek_hole": false, 00:17:27.568 "seek_data": false, 00:17:27.568 "copy": true, 00:17:27.568 "nvme_iov_md": false 00:17:27.568 }, 00:17:27.568 "memory_domains": [ 00:17:27.568 { 00:17:27.568 "dma_device_id": "system", 00:17:27.568 "dma_device_type": 1 00:17:27.568 }, 00:17:27.568 { 00:17:27.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.568 "dma_device_type": 2 00:17:27.568 } 00:17:27.568 ], 00:17:27.568 "driver_specific": {} 00:17:27.568 } 00:17:27.568 ] 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.568 "name": "Existed_Raid", 00:17:27.568 "uuid": "90e71cac-fb7c-4015-b79f-8c3d92d950aa", 00:17:27.568 "strip_size_kb": 64, 00:17:27.568 "state": "configuring", 00:17:27.568 "raid_level": "raid0", 00:17:27.568 "superblock": true, 00:17:27.568 "num_base_bdevs": 4, 00:17:27.568 "num_base_bdevs_discovered": 2, 00:17:27.568 
"num_base_bdevs_operational": 4, 00:17:27.568 "base_bdevs_list": [ 00:17:27.568 { 00:17:27.568 "name": "BaseBdev1", 00:17:27.568 "uuid": "edce2d73-586f-4149-926f-0f24cf58ca18", 00:17:27.568 "is_configured": true, 00:17:27.568 "data_offset": 2048, 00:17:27.568 "data_size": 63488 00:17:27.568 }, 00:17:27.568 { 00:17:27.568 "name": "BaseBdev2", 00:17:27.568 "uuid": "1f926a4c-d7bb-447e-aab6-636a98d97273", 00:17:27.568 "is_configured": true, 00:17:27.568 "data_offset": 2048, 00:17:27.568 "data_size": 63488 00:17:27.568 }, 00:17:27.568 { 00:17:27.568 "name": "BaseBdev3", 00:17:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.568 "is_configured": false, 00:17:27.568 "data_offset": 0, 00:17:27.568 "data_size": 0 00:17:27.568 }, 00:17:27.568 { 00:17:27.568 "name": "BaseBdev4", 00:17:27.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.568 "is_configured": false, 00:17:27.568 "data_offset": 0, 00:17:27.568 "data_size": 0 00:17:27.568 } 00:17:27.568 ] 00:17:27.568 }' 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.568 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.134 14:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:28.134 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.134 14:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 [2024-11-04 14:49:58.030326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.393 BaseBdev3 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 [ 00:17:28.393 { 00:17:28.393 "name": "BaseBdev3", 00:17:28.393 "aliases": [ 00:17:28.393 "45e66f80-8a35-4ded-8f33-e980a37a78ff" 00:17:28.393 ], 00:17:28.393 "product_name": "Malloc disk", 00:17:28.393 "block_size": 512, 00:17:28.393 "num_blocks": 65536, 00:17:28.393 "uuid": "45e66f80-8a35-4ded-8f33-e980a37a78ff", 00:17:28.393 "assigned_rate_limits": { 00:17:28.393 "rw_ios_per_sec": 0, 00:17:28.393 "rw_mbytes_per_sec": 0, 00:17:28.393 "r_mbytes_per_sec": 0, 00:17:28.393 "w_mbytes_per_sec": 0 00:17:28.393 }, 00:17:28.393 "claimed": true, 00:17:28.393 "claim_type": "exclusive_write", 00:17:28.393 "zoned": false, 00:17:28.393 "supported_io_types": { 
00:17:28.393 "read": true, 00:17:28.393 "write": true, 00:17:28.393 "unmap": true, 00:17:28.393 "flush": true, 00:17:28.393 "reset": true, 00:17:28.393 "nvme_admin": false, 00:17:28.393 "nvme_io": false, 00:17:28.393 "nvme_io_md": false, 00:17:28.393 "write_zeroes": true, 00:17:28.393 "zcopy": true, 00:17:28.393 "get_zone_info": false, 00:17:28.393 "zone_management": false, 00:17:28.393 "zone_append": false, 00:17:28.393 "compare": false, 00:17:28.393 "compare_and_write": false, 00:17:28.393 "abort": true, 00:17:28.393 "seek_hole": false, 00:17:28.393 "seek_data": false, 00:17:28.393 "copy": true, 00:17:28.393 "nvme_iov_md": false 00:17:28.393 }, 00:17:28.393 "memory_domains": [ 00:17:28.393 { 00:17:28.393 "dma_device_id": "system", 00:17:28.393 "dma_device_type": 1 00:17:28.393 }, 00:17:28.393 { 00:17:28.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.393 "dma_device_type": 2 00:17:28.393 } 00:17:28.393 ], 00:17:28.393 "driver_specific": {} 00:17:28.393 } 00:17:28.393 ] 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.393 "name": "Existed_Raid", 00:17:28.393 "uuid": "90e71cac-fb7c-4015-b79f-8c3d92d950aa", 00:17:28.393 "strip_size_kb": 64, 00:17:28.393 "state": "configuring", 00:17:28.393 "raid_level": "raid0", 00:17:28.393 "superblock": true, 00:17:28.393 "num_base_bdevs": 4, 00:17:28.393 "num_base_bdevs_discovered": 3, 00:17:28.393 "num_base_bdevs_operational": 4, 00:17:28.393 "base_bdevs_list": [ 00:17:28.393 { 00:17:28.393 "name": "BaseBdev1", 00:17:28.393 "uuid": "edce2d73-586f-4149-926f-0f24cf58ca18", 00:17:28.393 "is_configured": true, 00:17:28.393 "data_offset": 2048, 00:17:28.393 "data_size": 63488 00:17:28.393 }, 00:17:28.393 { 00:17:28.393 "name": "BaseBdev2", 00:17:28.393 
"uuid": "1f926a4c-d7bb-447e-aab6-636a98d97273", 00:17:28.393 "is_configured": true, 00:17:28.393 "data_offset": 2048, 00:17:28.393 "data_size": 63488 00:17:28.393 }, 00:17:28.393 { 00:17:28.393 "name": "BaseBdev3", 00:17:28.393 "uuid": "45e66f80-8a35-4ded-8f33-e980a37a78ff", 00:17:28.393 "is_configured": true, 00:17:28.393 "data_offset": 2048, 00:17:28.393 "data_size": 63488 00:17:28.393 }, 00:17:28.393 { 00:17:28.393 "name": "BaseBdev4", 00:17:28.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.393 "is_configured": false, 00:17:28.393 "data_offset": 0, 00:17:28.393 "data_size": 0 00:17:28.393 } 00:17:28.393 ] 00:17:28.393 }' 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.393 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.960 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:28.960 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.960 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.961 [2024-11-04 14:49:58.653589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:28.961 [2024-11-04 14:49:58.654010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:28.961 [2024-11-04 14:49:58.654037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:28.961 BaseBdev4 00:17:28.961 [2024-11-04 14:49:58.654410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:28.961 [2024-11-04 14:49:58.654612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:28.961 [2024-11-04 14:49:58.654643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:17:28.961 [2024-11-04 14:49:58.654828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.961 [ 00:17:28.961 { 00:17:28.961 "name": "BaseBdev4", 00:17:28.961 "aliases": [ 00:17:28.961 "2e081b9a-f029-40d7-8b72-73e9778f4b88" 00:17:28.961 ], 00:17:28.961 "product_name": "Malloc disk", 00:17:28.961 "block_size": 512, 00:17:28.961 
"num_blocks": 65536, 00:17:28.961 "uuid": "2e081b9a-f029-40d7-8b72-73e9778f4b88", 00:17:28.961 "assigned_rate_limits": { 00:17:28.961 "rw_ios_per_sec": 0, 00:17:28.961 "rw_mbytes_per_sec": 0, 00:17:28.961 "r_mbytes_per_sec": 0, 00:17:28.961 "w_mbytes_per_sec": 0 00:17:28.961 }, 00:17:28.961 "claimed": true, 00:17:28.961 "claim_type": "exclusive_write", 00:17:28.961 "zoned": false, 00:17:28.961 "supported_io_types": { 00:17:28.961 "read": true, 00:17:28.961 "write": true, 00:17:28.961 "unmap": true, 00:17:28.961 "flush": true, 00:17:28.961 "reset": true, 00:17:28.961 "nvme_admin": false, 00:17:28.961 "nvme_io": false, 00:17:28.961 "nvme_io_md": false, 00:17:28.961 "write_zeroes": true, 00:17:28.961 "zcopy": true, 00:17:28.961 "get_zone_info": false, 00:17:28.961 "zone_management": false, 00:17:28.961 "zone_append": false, 00:17:28.961 "compare": false, 00:17:28.961 "compare_and_write": false, 00:17:28.961 "abort": true, 00:17:28.961 "seek_hole": false, 00:17:28.961 "seek_data": false, 00:17:28.961 "copy": true, 00:17:28.961 "nvme_iov_md": false 00:17:28.961 }, 00:17:28.961 "memory_domains": [ 00:17:28.961 { 00:17:28.961 "dma_device_id": "system", 00:17:28.961 "dma_device_type": 1 00:17:28.961 }, 00:17:28.961 { 00:17:28.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.961 "dma_device_type": 2 00:17:28.961 } 00:17:28.961 ], 00:17:28.961 "driver_specific": {} 00:17:28.961 } 00:17:28.961 ] 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.961 "name": "Existed_Raid", 00:17:28.961 "uuid": "90e71cac-fb7c-4015-b79f-8c3d92d950aa", 00:17:28.961 "strip_size_kb": 64, 00:17:28.961 "state": "online", 00:17:28.961 "raid_level": "raid0", 00:17:28.961 "superblock": true, 00:17:28.961 "num_base_bdevs": 4, 
00:17:28.961 "num_base_bdevs_discovered": 4, 00:17:28.961 "num_base_bdevs_operational": 4, 00:17:28.961 "base_bdevs_list": [ 00:17:28.961 { 00:17:28.961 "name": "BaseBdev1", 00:17:28.961 "uuid": "edce2d73-586f-4149-926f-0f24cf58ca18", 00:17:28.961 "is_configured": true, 00:17:28.961 "data_offset": 2048, 00:17:28.961 "data_size": 63488 00:17:28.961 }, 00:17:28.961 { 00:17:28.961 "name": "BaseBdev2", 00:17:28.961 "uuid": "1f926a4c-d7bb-447e-aab6-636a98d97273", 00:17:28.961 "is_configured": true, 00:17:28.961 "data_offset": 2048, 00:17:28.961 "data_size": 63488 00:17:28.961 }, 00:17:28.961 { 00:17:28.961 "name": "BaseBdev3", 00:17:28.961 "uuid": "45e66f80-8a35-4ded-8f33-e980a37a78ff", 00:17:28.961 "is_configured": true, 00:17:28.961 "data_offset": 2048, 00:17:28.961 "data_size": 63488 00:17:28.961 }, 00:17:28.961 { 00:17:28.961 "name": "BaseBdev4", 00:17:28.961 "uuid": "2e081b9a-f029-40d7-8b72-73e9778f4b88", 00:17:28.961 "is_configured": true, 00:17:28.961 "data_offset": 2048, 00:17:28.961 "data_size": 63488 00:17:28.961 } 00:17:28.961 ] 00:17:28.961 }' 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.961 14:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.527 
14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.527 [2024-11-04 14:49:59.262297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.527 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.527 "name": "Existed_Raid", 00:17:29.527 "aliases": [ 00:17:29.527 "90e71cac-fb7c-4015-b79f-8c3d92d950aa" 00:17:29.527 ], 00:17:29.527 "product_name": "Raid Volume", 00:17:29.527 "block_size": 512, 00:17:29.527 "num_blocks": 253952, 00:17:29.527 "uuid": "90e71cac-fb7c-4015-b79f-8c3d92d950aa", 00:17:29.527 "assigned_rate_limits": { 00:17:29.527 "rw_ios_per_sec": 0, 00:17:29.527 "rw_mbytes_per_sec": 0, 00:17:29.527 "r_mbytes_per_sec": 0, 00:17:29.527 "w_mbytes_per_sec": 0 00:17:29.527 }, 00:17:29.527 "claimed": false, 00:17:29.527 "zoned": false, 00:17:29.527 "supported_io_types": { 00:17:29.527 "read": true, 00:17:29.527 "write": true, 00:17:29.527 "unmap": true, 00:17:29.527 "flush": true, 00:17:29.527 "reset": true, 00:17:29.527 "nvme_admin": false, 00:17:29.527 "nvme_io": false, 00:17:29.527 "nvme_io_md": false, 00:17:29.527 "write_zeroes": true, 00:17:29.527 "zcopy": false, 00:17:29.527 "get_zone_info": false, 00:17:29.527 "zone_management": false, 00:17:29.527 "zone_append": false, 00:17:29.527 "compare": false, 00:17:29.527 "compare_and_write": false, 00:17:29.527 "abort": false, 00:17:29.527 "seek_hole": false, 00:17:29.527 "seek_data": false, 00:17:29.527 "copy": false, 00:17:29.527 
"nvme_iov_md": false 00:17:29.527 }, 00:17:29.527 "memory_domains": [ 00:17:29.527 { 00:17:29.527 "dma_device_id": "system", 00:17:29.527 "dma_device_type": 1 00:17:29.527 }, 00:17:29.527 { 00:17:29.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.527 "dma_device_type": 2 00:17:29.527 }, 00:17:29.527 { 00:17:29.527 "dma_device_id": "system", 00:17:29.527 "dma_device_type": 1 00:17:29.527 }, 00:17:29.527 { 00:17:29.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.527 "dma_device_type": 2 00:17:29.527 }, 00:17:29.527 { 00:17:29.527 "dma_device_id": "system", 00:17:29.527 "dma_device_type": 1 00:17:29.527 }, 00:17:29.527 { 00:17:29.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.527 "dma_device_type": 2 00:17:29.527 }, 00:17:29.527 { 00:17:29.527 "dma_device_id": "system", 00:17:29.527 "dma_device_type": 1 00:17:29.527 }, 00:17:29.527 { 00:17:29.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.527 "dma_device_type": 2 00:17:29.527 } 00:17:29.527 ], 00:17:29.527 "driver_specific": { 00:17:29.527 "raid": { 00:17:29.527 "uuid": "90e71cac-fb7c-4015-b79f-8c3d92d950aa", 00:17:29.527 "strip_size_kb": 64, 00:17:29.527 "state": "online", 00:17:29.527 "raid_level": "raid0", 00:17:29.527 "superblock": true, 00:17:29.527 "num_base_bdevs": 4, 00:17:29.527 "num_base_bdevs_discovered": 4, 00:17:29.527 "num_base_bdevs_operational": 4, 00:17:29.527 "base_bdevs_list": [ 00:17:29.527 { 00:17:29.527 "name": "BaseBdev1", 00:17:29.527 "uuid": "edce2d73-586f-4149-926f-0f24cf58ca18", 00:17:29.528 "is_configured": true, 00:17:29.528 "data_offset": 2048, 00:17:29.528 "data_size": 63488 00:17:29.528 }, 00:17:29.528 { 00:17:29.528 "name": "BaseBdev2", 00:17:29.528 "uuid": "1f926a4c-d7bb-447e-aab6-636a98d97273", 00:17:29.528 "is_configured": true, 00:17:29.528 "data_offset": 2048, 00:17:29.528 "data_size": 63488 00:17:29.528 }, 00:17:29.528 { 00:17:29.528 "name": "BaseBdev3", 00:17:29.528 "uuid": "45e66f80-8a35-4ded-8f33-e980a37a78ff", 00:17:29.528 "is_configured": true, 
00:17:29.528 "data_offset": 2048, 00:17:29.528 "data_size": 63488 00:17:29.528 }, 00:17:29.528 { 00:17:29.528 "name": "BaseBdev4", 00:17:29.528 "uuid": "2e081b9a-f029-40d7-8b72-73e9778f4b88", 00:17:29.528 "is_configured": true, 00:17:29.528 "data_offset": 2048, 00:17:29.528 "data_size": 63488 00:17:29.528 } 00:17:29.528 ] 00:17:29.528 } 00:17:29.528 } 00:17:29.528 }' 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:29.528 BaseBdev2 00:17:29.528 BaseBdev3 00:17:29.528 BaseBdev4' 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.528 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.786 14:49:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.786 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.787 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.787 [2024-11-04 14:49:59.634006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.787 [2024-11-04 14:49:59.634054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.787 [2024-11-04 14:49:59.634131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.045 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.046 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.046 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.046 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:30.046 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.046 "name": "Existed_Raid", 00:17:30.046 "uuid": "90e71cac-fb7c-4015-b79f-8c3d92d950aa", 00:17:30.046 "strip_size_kb": 64, 00:17:30.046 "state": "offline", 00:17:30.046 "raid_level": "raid0", 00:17:30.046 "superblock": true, 00:17:30.046 "num_base_bdevs": 4, 00:17:30.046 "num_base_bdevs_discovered": 3, 00:17:30.046 "num_base_bdevs_operational": 3, 00:17:30.046 "base_bdevs_list": [ 00:17:30.046 { 00:17:30.046 "name": null, 00:17:30.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.046 "is_configured": false, 00:17:30.046 "data_offset": 0, 00:17:30.046 "data_size": 63488 00:17:30.046 }, 00:17:30.046 { 00:17:30.046 "name": "BaseBdev2", 00:17:30.046 "uuid": "1f926a4c-d7bb-447e-aab6-636a98d97273", 00:17:30.046 "is_configured": true, 00:17:30.046 "data_offset": 2048, 00:17:30.046 "data_size": 63488 00:17:30.046 }, 00:17:30.046 { 00:17:30.046 "name": "BaseBdev3", 00:17:30.046 "uuid": "45e66f80-8a35-4ded-8f33-e980a37a78ff", 00:17:30.046 "is_configured": true, 00:17:30.046 "data_offset": 2048, 00:17:30.046 "data_size": 63488 00:17:30.046 }, 00:17:30.046 { 00:17:30.046 "name": "BaseBdev4", 00:17:30.046 "uuid": "2e081b9a-f029-40d7-8b72-73e9778f4b88", 00:17:30.046 "is_configured": true, 00:17:30.046 "data_offset": 2048, 00:17:30.046 "data_size": 63488 00:17:30.046 } 00:17:30.046 ] 00:17:30.046 }' 00:17:30.046 14:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.046 14:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.633 
14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.633 [2024-11-04 14:50:00.327117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.633 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.633 [2024-11-04 14:50:00.502288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:30.892 14:50:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.892 [2024-11-04 14:50:00.652486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:30.892 [2024-11-04 14:50:00.652557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.892 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.151 BaseBdev2 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.151 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.151 [ 00:17:31.151 { 00:17:31.151 "name": "BaseBdev2", 00:17:31.151 "aliases": [ 00:17:31.151 
"ee06d5a1-1541-4292-9d0b-d2b4a557ff4a" 00:17:31.151 ], 00:17:31.151 "product_name": "Malloc disk", 00:17:31.151 "block_size": 512, 00:17:31.151 "num_blocks": 65536, 00:17:31.151 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:31.151 "assigned_rate_limits": { 00:17:31.151 "rw_ios_per_sec": 0, 00:17:31.151 "rw_mbytes_per_sec": 0, 00:17:31.151 "r_mbytes_per_sec": 0, 00:17:31.151 "w_mbytes_per_sec": 0 00:17:31.151 }, 00:17:31.151 "claimed": false, 00:17:31.151 "zoned": false, 00:17:31.151 "supported_io_types": { 00:17:31.151 "read": true, 00:17:31.151 "write": true, 00:17:31.151 "unmap": true, 00:17:31.151 "flush": true, 00:17:31.151 "reset": true, 00:17:31.151 "nvme_admin": false, 00:17:31.151 "nvme_io": false, 00:17:31.151 "nvme_io_md": false, 00:17:31.151 "write_zeroes": true, 00:17:31.151 "zcopy": true, 00:17:31.151 "get_zone_info": false, 00:17:31.151 "zone_management": false, 00:17:31.151 "zone_append": false, 00:17:31.151 "compare": false, 00:17:31.151 "compare_and_write": false, 00:17:31.151 "abort": true, 00:17:31.151 "seek_hole": false, 00:17:31.151 "seek_data": false, 00:17:31.151 "copy": true, 00:17:31.151 "nvme_iov_md": false 00:17:31.151 }, 00:17:31.151 "memory_domains": [ 00:17:31.151 { 00:17:31.151 "dma_device_id": "system", 00:17:31.151 "dma_device_type": 1 00:17:31.151 }, 00:17:31.151 { 00:17:31.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.151 "dma_device_type": 2 00:17:31.151 } 00:17:31.152 ], 00:17:31.152 "driver_specific": {} 00:17:31.152 } 00:17:31.152 ] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:31.152 14:50:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.152 BaseBdev3 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.152 [ 00:17:31.152 { 
00:17:31.152 "name": "BaseBdev3", 00:17:31.152 "aliases": [ 00:17:31.152 "53b1dd77-7025-457b-ad0f-e659437b1385" 00:17:31.152 ], 00:17:31.152 "product_name": "Malloc disk", 00:17:31.152 "block_size": 512, 00:17:31.152 "num_blocks": 65536, 00:17:31.152 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:31.152 "assigned_rate_limits": { 00:17:31.152 "rw_ios_per_sec": 0, 00:17:31.152 "rw_mbytes_per_sec": 0, 00:17:31.152 "r_mbytes_per_sec": 0, 00:17:31.152 "w_mbytes_per_sec": 0 00:17:31.152 }, 00:17:31.152 "claimed": false, 00:17:31.152 "zoned": false, 00:17:31.152 "supported_io_types": { 00:17:31.152 "read": true, 00:17:31.152 "write": true, 00:17:31.152 "unmap": true, 00:17:31.152 "flush": true, 00:17:31.152 "reset": true, 00:17:31.152 "nvme_admin": false, 00:17:31.152 "nvme_io": false, 00:17:31.152 "nvme_io_md": false, 00:17:31.152 "write_zeroes": true, 00:17:31.152 "zcopy": true, 00:17:31.152 "get_zone_info": false, 00:17:31.152 "zone_management": false, 00:17:31.152 "zone_append": false, 00:17:31.152 "compare": false, 00:17:31.152 "compare_and_write": false, 00:17:31.152 "abort": true, 00:17:31.152 "seek_hole": false, 00:17:31.152 "seek_data": false, 00:17:31.152 "copy": true, 00:17:31.152 "nvme_iov_md": false 00:17:31.152 }, 00:17:31.152 "memory_domains": [ 00:17:31.152 { 00:17:31.152 "dma_device_id": "system", 00:17:31.152 "dma_device_type": 1 00:17:31.152 }, 00:17:31.152 { 00:17:31.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.152 "dma_device_type": 2 00:17:31.152 } 00:17:31.152 ], 00:17:31.152 "driver_specific": {} 00:17:31.152 } 00:17:31.152 ] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.152 BaseBdev4 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.152 14:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:31.152 [ 00:17:31.152 { 00:17:31.152 "name": "BaseBdev4", 00:17:31.152 "aliases": [ 00:17:31.152 "b4358ae0-b991-42ed-b5fb-bcbd38b8704f" 00:17:31.152 ], 00:17:31.152 "product_name": "Malloc disk", 00:17:31.152 "block_size": 512, 00:17:31.152 "num_blocks": 65536, 00:17:31.152 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:31.152 "assigned_rate_limits": { 00:17:31.152 "rw_ios_per_sec": 0, 00:17:31.152 "rw_mbytes_per_sec": 0, 00:17:31.152 "r_mbytes_per_sec": 0, 00:17:31.152 "w_mbytes_per_sec": 0 00:17:31.152 }, 00:17:31.152 "claimed": false, 00:17:31.152 "zoned": false, 00:17:31.152 "supported_io_types": { 00:17:31.152 "read": true, 00:17:31.152 "write": true, 00:17:31.152 "unmap": true, 00:17:31.152 "flush": true, 00:17:31.152 "reset": true, 00:17:31.152 "nvme_admin": false, 00:17:31.152 "nvme_io": false, 00:17:31.152 "nvme_io_md": false, 00:17:31.152 "write_zeroes": true, 00:17:31.152 "zcopy": true, 00:17:31.152 "get_zone_info": false, 00:17:31.152 "zone_management": false, 00:17:31.152 "zone_append": false, 00:17:31.152 "compare": false, 00:17:31.152 "compare_and_write": false, 00:17:31.152 "abort": true, 00:17:31.152 "seek_hole": false, 00:17:31.152 "seek_data": false, 00:17:31.152 "copy": true, 00:17:31.152 "nvme_iov_md": false 00:17:31.152 }, 00:17:31.152 "memory_domains": [ 00:17:31.152 { 00:17:31.152 "dma_device_id": "system", 00:17:31.152 "dma_device_type": 1 00:17:31.152 }, 00:17:31.152 { 00:17:31.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.152 "dma_device_type": 2 00:17:31.152 } 00:17:31.152 ], 00:17:31.152 "driver_specific": {} 00:17:31.152 } 00:17:31.152 ] 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:31.152 14:50:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.152 [2024-11-04 14:50:01.031357] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.152 [2024-11-04 14:50:01.031430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.152 [2024-11-04 14:50:01.031462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.152 [2024-11-04 14:50:01.034153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.152 [2024-11-04 14:50:01.034289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.152 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.153 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.411 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.411 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.411 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.411 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.411 "name": "Existed_Raid", 00:17:31.411 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:31.411 "strip_size_kb": 64, 00:17:31.411 "state": "configuring", 00:17:31.411 "raid_level": "raid0", 00:17:31.411 "superblock": true, 00:17:31.411 "num_base_bdevs": 4, 00:17:31.411 "num_base_bdevs_discovered": 3, 00:17:31.411 "num_base_bdevs_operational": 4, 00:17:31.411 "base_bdevs_list": [ 00:17:31.411 { 00:17:31.411 "name": "BaseBdev1", 00:17:31.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.411 "is_configured": false, 00:17:31.411 "data_offset": 0, 00:17:31.411 "data_size": 0 00:17:31.411 }, 00:17:31.411 { 00:17:31.411 "name": "BaseBdev2", 00:17:31.411 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:31.411 "is_configured": true, 00:17:31.411 "data_offset": 2048, 00:17:31.411 "data_size": 63488 
00:17:31.411 }, 00:17:31.411 { 00:17:31.411 "name": "BaseBdev3", 00:17:31.411 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:31.411 "is_configured": true, 00:17:31.411 "data_offset": 2048, 00:17:31.411 "data_size": 63488 00:17:31.411 }, 00:17:31.411 { 00:17:31.411 "name": "BaseBdev4", 00:17:31.411 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:31.411 "is_configured": true, 00:17:31.411 "data_offset": 2048, 00:17:31.411 "data_size": 63488 00:17:31.411 } 00:17:31.411 ] 00:17:31.411 }' 00:17:31.411 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.411 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.671 [2024-11-04 14:50:01.547574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.671 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.930 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.930 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.930 "name": "Existed_Raid", 00:17:31.930 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:31.930 "strip_size_kb": 64, 00:17:31.930 "state": "configuring", 00:17:31.930 "raid_level": "raid0", 00:17:31.930 "superblock": true, 00:17:31.930 "num_base_bdevs": 4, 00:17:31.930 "num_base_bdevs_discovered": 2, 00:17:31.930 "num_base_bdevs_operational": 4, 00:17:31.930 "base_bdevs_list": [ 00:17:31.930 { 00:17:31.930 "name": "BaseBdev1", 00:17:31.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.930 "is_configured": false, 00:17:31.930 "data_offset": 0, 00:17:31.930 "data_size": 0 00:17:31.930 }, 00:17:31.930 { 00:17:31.930 "name": null, 00:17:31.930 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:31.930 "is_configured": false, 00:17:31.930 "data_offset": 0, 00:17:31.930 "data_size": 63488 
00:17:31.930 }, 00:17:31.930 { 00:17:31.930 "name": "BaseBdev3", 00:17:31.930 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:31.930 "is_configured": true, 00:17:31.930 "data_offset": 2048, 00:17:31.930 "data_size": 63488 00:17:31.930 }, 00:17:31.930 { 00:17:31.930 "name": "BaseBdev4", 00:17:31.930 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:31.930 "is_configured": true, 00:17:31.930 "data_offset": 2048, 00:17:31.930 "data_size": 63488 00:17:31.930 } 00:17:31.930 ] 00:17:31.930 }' 00:17:31.930 14:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.930 14:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.496 [2024-11-04 14:50:02.193822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.496 BaseBdev1 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.496 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.496 [ 00:17:32.496 { 00:17:32.496 "name": "BaseBdev1", 00:17:32.496 "aliases": [ 00:17:32.496 "60c1e357-7e21-47f2-ba5b-82051247dcef" 00:17:32.496 ], 00:17:32.496 "product_name": "Malloc disk", 00:17:32.496 "block_size": 512, 00:17:32.496 "num_blocks": 65536, 00:17:32.496 "uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:32.496 "assigned_rate_limits": { 00:17:32.496 "rw_ios_per_sec": 0, 00:17:32.496 "rw_mbytes_per_sec": 0, 
00:17:32.496 "r_mbytes_per_sec": 0, 00:17:32.496 "w_mbytes_per_sec": 0 00:17:32.496 }, 00:17:32.496 "claimed": true, 00:17:32.496 "claim_type": "exclusive_write", 00:17:32.496 "zoned": false, 00:17:32.496 "supported_io_types": { 00:17:32.496 "read": true, 00:17:32.496 "write": true, 00:17:32.496 "unmap": true, 00:17:32.496 "flush": true, 00:17:32.496 "reset": true, 00:17:32.496 "nvme_admin": false, 00:17:32.496 "nvme_io": false, 00:17:32.496 "nvme_io_md": false, 00:17:32.496 "write_zeroes": true, 00:17:32.496 "zcopy": true, 00:17:32.496 "get_zone_info": false, 00:17:32.496 "zone_management": false, 00:17:32.496 "zone_append": false, 00:17:32.497 "compare": false, 00:17:32.497 "compare_and_write": false, 00:17:32.497 "abort": true, 00:17:32.497 "seek_hole": false, 00:17:32.497 "seek_data": false, 00:17:32.497 "copy": true, 00:17:32.497 "nvme_iov_md": false 00:17:32.497 }, 00:17:32.497 "memory_domains": [ 00:17:32.497 { 00:17:32.497 "dma_device_id": "system", 00:17:32.497 "dma_device_type": 1 00:17:32.497 }, 00:17:32.497 { 00:17:32.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.497 "dma_device_type": 2 00:17:32.497 } 00:17:32.497 ], 00:17:32.497 "driver_specific": {} 00:17:32.497 } 00:17:32.497 ] 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:32.497 14:50:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.497 "name": "Existed_Raid", 00:17:32.497 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:32.497 "strip_size_kb": 64, 00:17:32.497 "state": "configuring", 00:17:32.497 "raid_level": "raid0", 00:17:32.497 "superblock": true, 00:17:32.497 "num_base_bdevs": 4, 00:17:32.497 "num_base_bdevs_discovered": 3, 00:17:32.497 "num_base_bdevs_operational": 4, 00:17:32.497 "base_bdevs_list": [ 00:17:32.497 { 00:17:32.497 "name": "BaseBdev1", 00:17:32.497 "uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:32.497 "is_configured": true, 00:17:32.497 "data_offset": 2048, 00:17:32.497 "data_size": 63488 00:17:32.497 }, 00:17:32.497 { 
00:17:32.497 "name": null, 00:17:32.497 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:32.497 "is_configured": false, 00:17:32.497 "data_offset": 0, 00:17:32.497 "data_size": 63488 00:17:32.497 }, 00:17:32.497 { 00:17:32.497 "name": "BaseBdev3", 00:17:32.497 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:32.497 "is_configured": true, 00:17:32.497 "data_offset": 2048, 00:17:32.497 "data_size": 63488 00:17:32.497 }, 00:17:32.497 { 00:17:32.497 "name": "BaseBdev4", 00:17:32.497 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:32.497 "is_configured": true, 00:17:32.497 "data_offset": 2048, 00:17:32.497 "data_size": 63488 00:17:32.497 } 00:17:32.497 ] 00:17:32.497 }' 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.497 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.063 [2024-11-04 14:50:02.866227] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.063 14:50:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.063 "name": "Existed_Raid", 00:17:33.063 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:33.063 "strip_size_kb": 64, 00:17:33.063 "state": "configuring", 00:17:33.063 "raid_level": "raid0", 00:17:33.063 "superblock": true, 00:17:33.063 "num_base_bdevs": 4, 00:17:33.063 "num_base_bdevs_discovered": 2, 00:17:33.063 "num_base_bdevs_operational": 4, 00:17:33.063 "base_bdevs_list": [ 00:17:33.063 { 00:17:33.063 "name": "BaseBdev1", 00:17:33.063 "uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:33.063 "is_configured": true, 00:17:33.063 "data_offset": 2048, 00:17:33.063 "data_size": 63488 00:17:33.063 }, 00:17:33.063 { 00:17:33.063 "name": null, 00:17:33.063 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:33.063 "is_configured": false, 00:17:33.063 "data_offset": 0, 00:17:33.063 "data_size": 63488 00:17:33.063 }, 00:17:33.063 { 00:17:33.063 "name": null, 00:17:33.063 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:33.063 "is_configured": false, 00:17:33.063 "data_offset": 0, 00:17:33.063 "data_size": 63488 00:17:33.063 }, 00:17:33.063 { 00:17:33.063 "name": "BaseBdev4", 00:17:33.063 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:33.063 "is_configured": true, 00:17:33.063 "data_offset": 2048, 00:17:33.063 "data_size": 63488 00:17:33.063 } 00:17:33.063 ] 00:17:33.063 }' 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.063 14:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 
14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 [2024-11-04 14:50:03.458424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.630 "name": "Existed_Raid", 00:17:33.630 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:33.630 "strip_size_kb": 64, 00:17:33.630 "state": "configuring", 00:17:33.630 "raid_level": "raid0", 00:17:33.630 "superblock": true, 00:17:33.630 "num_base_bdevs": 4, 00:17:33.630 "num_base_bdevs_discovered": 3, 00:17:33.630 "num_base_bdevs_operational": 4, 00:17:33.630 "base_bdevs_list": [ 00:17:33.630 { 00:17:33.630 "name": "BaseBdev1", 00:17:33.630 "uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:33.630 "is_configured": true, 00:17:33.630 "data_offset": 2048, 00:17:33.630 "data_size": 63488 00:17:33.630 }, 00:17:33.630 { 00:17:33.630 "name": null, 00:17:33.630 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:33.630 "is_configured": false, 00:17:33.630 "data_offset": 0, 00:17:33.630 "data_size": 63488 00:17:33.630 }, 00:17:33.630 { 00:17:33.630 "name": "BaseBdev3", 00:17:33.630 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:33.630 "is_configured": true, 00:17:33.630 "data_offset": 2048, 00:17:33.630 "data_size": 63488 00:17:33.630 }, 00:17:33.630 { 00:17:33.630 "name": "BaseBdev4", 00:17:33.630 "uuid": 
"b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:33.630 "is_configured": true, 00:17:33.630 "data_offset": 2048, 00:17:33.630 "data_size": 63488 00:17:33.630 } 00:17:33.630 ] 00:17:33.630 }' 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.630 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.197 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.197 14:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:34.197 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.197 14:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.197 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.197 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:34.197 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:34.197 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.197 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.197 [2024-11-04 14:50:04.050698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.455 "name": "Existed_Raid", 00:17:34.455 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:34.455 "strip_size_kb": 64, 00:17:34.455 "state": "configuring", 00:17:34.455 "raid_level": "raid0", 00:17:34.455 "superblock": true, 00:17:34.455 "num_base_bdevs": 4, 00:17:34.455 "num_base_bdevs_discovered": 2, 00:17:34.455 "num_base_bdevs_operational": 4, 00:17:34.455 "base_bdevs_list": [ 00:17:34.455 { 00:17:34.455 "name": null, 00:17:34.455 
"uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:34.455 "is_configured": false, 00:17:34.455 "data_offset": 0, 00:17:34.455 "data_size": 63488 00:17:34.455 }, 00:17:34.455 { 00:17:34.455 "name": null, 00:17:34.455 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:34.455 "is_configured": false, 00:17:34.455 "data_offset": 0, 00:17:34.455 "data_size": 63488 00:17:34.455 }, 00:17:34.455 { 00:17:34.455 "name": "BaseBdev3", 00:17:34.455 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:34.455 "is_configured": true, 00:17:34.455 "data_offset": 2048, 00:17:34.455 "data_size": 63488 00:17:34.455 }, 00:17:34.455 { 00:17:34.455 "name": "BaseBdev4", 00:17:34.455 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:34.455 "is_configured": true, 00:17:34.455 "data_offset": 2048, 00:17:34.455 "data_size": 63488 00:17:34.455 } 00:17:34.455 ] 00:17:34.455 }' 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.455 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.021 [2024-11-04 14:50:04.753064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.021 "name": "Existed_Raid", 00:17:35.021 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:35.021 "strip_size_kb": 64, 00:17:35.021 "state": "configuring", 00:17:35.021 "raid_level": "raid0", 00:17:35.021 "superblock": true, 00:17:35.021 "num_base_bdevs": 4, 00:17:35.021 "num_base_bdevs_discovered": 3, 00:17:35.021 "num_base_bdevs_operational": 4, 00:17:35.021 "base_bdevs_list": [ 00:17:35.021 { 00:17:35.021 "name": null, 00:17:35.021 "uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:35.021 "is_configured": false, 00:17:35.021 "data_offset": 0, 00:17:35.021 "data_size": 63488 00:17:35.021 }, 00:17:35.021 { 00:17:35.021 "name": "BaseBdev2", 00:17:35.021 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:35.021 "is_configured": true, 00:17:35.021 "data_offset": 2048, 00:17:35.021 "data_size": 63488 00:17:35.021 }, 00:17:35.021 { 00:17:35.021 "name": "BaseBdev3", 00:17:35.021 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:35.021 "is_configured": true, 00:17:35.021 "data_offset": 2048, 00:17:35.021 "data_size": 63488 00:17:35.021 }, 00:17:35.021 { 00:17:35.021 "name": "BaseBdev4", 00:17:35.021 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:35.021 "is_configured": true, 00:17:35.021 "data_offset": 2048, 00:17:35.021 "data_size": 63488 00:17:35.021 } 00:17:35.021 ] 00:17:35.021 }' 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.021 14:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.587 14:50:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 60c1e357-7e21-47f2-ba5b-82051247dcef 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.587 [2024-11-04 14:50:05.424811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:35.587 [2024-11-04 14:50:05.425162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:35.587 [2024-11-04 14:50:05.425188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:35.587 [2024-11-04 14:50:05.425566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:17:35.587 NewBaseBdev 00:17:35.587 [2024-11-04 14:50:05.425769] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:35.587 [2024-11-04 14:50:05.425792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:35.587 [2024-11-04 14:50:05.425959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:35.587 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.587 14:50:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.587 [ 00:17:35.587 { 00:17:35.587 "name": "NewBaseBdev", 00:17:35.587 "aliases": [ 00:17:35.587 "60c1e357-7e21-47f2-ba5b-82051247dcef" 00:17:35.587 ], 00:17:35.587 "product_name": "Malloc disk", 00:17:35.587 "block_size": 512, 00:17:35.587 "num_blocks": 65536, 00:17:35.587 "uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:35.587 "assigned_rate_limits": { 00:17:35.587 "rw_ios_per_sec": 0, 00:17:35.587 "rw_mbytes_per_sec": 0, 00:17:35.587 "r_mbytes_per_sec": 0, 00:17:35.587 "w_mbytes_per_sec": 0 00:17:35.587 }, 00:17:35.587 "claimed": true, 00:17:35.587 "claim_type": "exclusive_write", 00:17:35.587 "zoned": false, 00:17:35.587 "supported_io_types": { 00:17:35.587 "read": true, 00:17:35.587 "write": true, 00:17:35.587 "unmap": true, 00:17:35.587 "flush": true, 00:17:35.587 "reset": true, 00:17:35.587 "nvme_admin": false, 00:17:35.587 "nvme_io": false, 00:17:35.587 "nvme_io_md": false, 00:17:35.587 "write_zeroes": true, 00:17:35.587 "zcopy": true, 00:17:35.587 "get_zone_info": false, 00:17:35.587 "zone_management": false, 00:17:35.587 "zone_append": false, 00:17:35.587 "compare": false, 00:17:35.587 "compare_and_write": false, 00:17:35.587 "abort": true, 00:17:35.587 "seek_hole": false, 00:17:35.587 "seek_data": false, 00:17:35.587 "copy": true, 00:17:35.587 "nvme_iov_md": false 00:17:35.587 }, 00:17:35.587 "memory_domains": [ 00:17:35.588 { 00:17:35.588 "dma_device_id": "system", 00:17:35.588 "dma_device_type": 1 00:17:35.588 }, 00:17:35.588 { 00:17:35.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.588 "dma_device_type": 2 00:17:35.588 } 00:17:35.588 ], 00:17:35.588 "driver_specific": {} 00:17:35.588 } 00:17:35.588 ] 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:35.588 14:50:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.588 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.846 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.846 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.846 "name": "Existed_Raid", 00:17:35.846 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:35.846 "strip_size_kb": 64, 00:17:35.846 
"state": "online", 00:17:35.846 "raid_level": "raid0", 00:17:35.846 "superblock": true, 00:17:35.846 "num_base_bdevs": 4, 00:17:35.846 "num_base_bdevs_discovered": 4, 00:17:35.846 "num_base_bdevs_operational": 4, 00:17:35.846 "base_bdevs_list": [ 00:17:35.846 { 00:17:35.846 "name": "NewBaseBdev", 00:17:35.846 "uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:35.846 "is_configured": true, 00:17:35.846 "data_offset": 2048, 00:17:35.846 "data_size": 63488 00:17:35.846 }, 00:17:35.846 { 00:17:35.846 "name": "BaseBdev2", 00:17:35.846 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:35.846 "is_configured": true, 00:17:35.846 "data_offset": 2048, 00:17:35.846 "data_size": 63488 00:17:35.846 }, 00:17:35.846 { 00:17:35.846 "name": "BaseBdev3", 00:17:35.846 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:35.846 "is_configured": true, 00:17:35.846 "data_offset": 2048, 00:17:35.846 "data_size": 63488 00:17:35.846 }, 00:17:35.846 { 00:17:35.846 "name": "BaseBdev4", 00:17:35.846 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:35.846 "is_configured": true, 00:17:35.846 "data_offset": 2048, 00:17:35.846 "data_size": 63488 00:17:35.846 } 00:17:35.846 ] 00:17:35.846 }' 00:17:35.846 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.846 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:36.104 
14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:36.104 14:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.363 [2024-11-04 14:50:05.997617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:36.363 "name": "Existed_Raid", 00:17:36.363 "aliases": [ 00:17:36.363 "8f3c6630-d064-456f-9acd-248f15b44bee" 00:17:36.363 ], 00:17:36.363 "product_name": "Raid Volume", 00:17:36.363 "block_size": 512, 00:17:36.363 "num_blocks": 253952, 00:17:36.363 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:36.363 "assigned_rate_limits": { 00:17:36.363 "rw_ios_per_sec": 0, 00:17:36.363 "rw_mbytes_per_sec": 0, 00:17:36.363 "r_mbytes_per_sec": 0, 00:17:36.363 "w_mbytes_per_sec": 0 00:17:36.363 }, 00:17:36.363 "claimed": false, 00:17:36.363 "zoned": false, 00:17:36.363 "supported_io_types": { 00:17:36.363 "read": true, 00:17:36.363 "write": true, 00:17:36.363 "unmap": true, 00:17:36.363 "flush": true, 00:17:36.363 "reset": true, 00:17:36.363 "nvme_admin": false, 00:17:36.363 "nvme_io": false, 00:17:36.363 "nvme_io_md": false, 00:17:36.363 "write_zeroes": true, 00:17:36.363 "zcopy": false, 00:17:36.363 "get_zone_info": false, 00:17:36.363 "zone_management": false, 00:17:36.363 "zone_append": false, 00:17:36.363 "compare": false, 00:17:36.363 "compare_and_write": false, 00:17:36.363 "abort": 
false, 00:17:36.363 "seek_hole": false, 00:17:36.363 "seek_data": false, 00:17:36.363 "copy": false, 00:17:36.363 "nvme_iov_md": false 00:17:36.363 }, 00:17:36.363 "memory_domains": [ 00:17:36.363 { 00:17:36.363 "dma_device_id": "system", 00:17:36.363 "dma_device_type": 1 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.363 "dma_device_type": 2 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "dma_device_id": "system", 00:17:36.363 "dma_device_type": 1 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.363 "dma_device_type": 2 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "dma_device_id": "system", 00:17:36.363 "dma_device_type": 1 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.363 "dma_device_type": 2 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "dma_device_id": "system", 00:17:36.363 "dma_device_type": 1 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.363 "dma_device_type": 2 00:17:36.363 } 00:17:36.363 ], 00:17:36.363 "driver_specific": { 00:17:36.363 "raid": { 00:17:36.363 "uuid": "8f3c6630-d064-456f-9acd-248f15b44bee", 00:17:36.363 "strip_size_kb": 64, 00:17:36.363 "state": "online", 00:17:36.363 "raid_level": "raid0", 00:17:36.363 "superblock": true, 00:17:36.363 "num_base_bdevs": 4, 00:17:36.363 "num_base_bdevs_discovered": 4, 00:17:36.363 "num_base_bdevs_operational": 4, 00:17:36.363 "base_bdevs_list": [ 00:17:36.363 { 00:17:36.363 "name": "NewBaseBdev", 00:17:36.363 "uuid": "60c1e357-7e21-47f2-ba5b-82051247dcef", 00:17:36.363 "is_configured": true, 00:17:36.363 "data_offset": 2048, 00:17:36.363 "data_size": 63488 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "name": "BaseBdev2", 00:17:36.363 "uuid": "ee06d5a1-1541-4292-9d0b-d2b4a557ff4a", 00:17:36.363 "is_configured": true, 00:17:36.363 "data_offset": 2048, 00:17:36.363 "data_size": 63488 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 
"name": "BaseBdev3", 00:17:36.363 "uuid": "53b1dd77-7025-457b-ad0f-e659437b1385", 00:17:36.363 "is_configured": true, 00:17:36.363 "data_offset": 2048, 00:17:36.363 "data_size": 63488 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "name": "BaseBdev4", 00:17:36.363 "uuid": "b4358ae0-b991-42ed-b5fb-bcbd38b8704f", 00:17:36.363 "is_configured": true, 00:17:36.363 "data_offset": 2048, 00:17:36.363 "data_size": 63488 00:17:36.363 } 00:17:36.363 ] 00:17:36.363 } 00:17:36.363 } 00:17:36.363 }' 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:36.363 BaseBdev2 00:17:36.363 BaseBdev3 00:17:36.363 BaseBdev4' 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.363 14:50:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.363 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.622 [2024-11-04 14:50:06.389093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.622 [2024-11-04 14:50:06.389139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.622 [2024-11-04 14:50:06.389293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.622 [2024-11-04 14:50:06.389412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.622 [2024-11-04 14:50:06.389437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70269 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70269 ']' 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70269 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70269 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:36.622 killing process with pid 70269 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70269' 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70269 00:17:36.622 [2024-11-04 14:50:06.427414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.622 14:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70269 00:17:37.188 [2024-11-04 14:50:06.813482] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.122 14:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:38.122 00:17:38.122 real 0m13.467s 00:17:38.122 user 0m22.169s 00:17:38.122 sys 0m1.988s 00:17:38.122 14:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:38.122 14:50:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.122 ************************************ 00:17:38.122 END TEST raid_state_function_test_sb 00:17:38.122 ************************************ 00:17:38.380 14:50:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:38.380 14:50:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:38.380 14:50:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:38.380 14:50:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:38.380 ************************************ 00:17:38.380 START TEST raid_superblock_test 00:17:38.380 ************************************ 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:38.380 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70964 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70964 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70964 ']' 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:38.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:38.381 14:50:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.381 [2024-11-04 14:50:08.155031] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:17:38.381 [2024-11-04 14:50:08.155252] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70964 ] 00:17:38.639 [2024-11-04 14:50:08.338910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.640 [2024-11-04 14:50:08.487424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.898 [2024-11-04 14:50:08.722512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.898 [2024-11-04 14:50:08.722602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:39.467 
14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.467 malloc1 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.467 [2024-11-04 14:50:09.228123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:39.467 [2024-11-04 14:50:09.228206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.467 [2024-11-04 14:50:09.228258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:39.467 [2024-11-04 14:50:09.228276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.467 [2024-11-04 14:50:09.231369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.467 [2024-11-04 14:50:09.231421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:39.467 pt1 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.467 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.467 malloc2 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.468 [2024-11-04 14:50:09.290138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:39.468 [2024-11-04 14:50:09.290214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.468 [2024-11-04 14:50:09.290263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:39.468 [2024-11-04 14:50:09.290280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.468 [2024-11-04 14:50:09.293242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.468 [2024-11-04 14:50:09.293301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:39.468 
pt2 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.468 malloc3 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.468 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.726 [2024-11-04 14:50:09.361631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:39.726 [2024-11-04 14:50:09.361706] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.726 [2024-11-04 14:50:09.361743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:39.726 [2024-11-04 14:50:09.361759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.726 [2024-11-04 14:50:09.364734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.726 [2024-11-04 14:50:09.364780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:39.726 pt3 00:17:39.726 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.726 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:39.726 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.726 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:39.726 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.727 malloc4 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.727 [2024-11-04 14:50:09.422393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:39.727 [2024-11-04 14:50:09.422462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.727 [2024-11-04 14:50:09.422493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:39.727 [2024-11-04 14:50:09.422509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.727 [2024-11-04 14:50:09.425436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.727 [2024-11-04 14:50:09.425489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:39.727 pt4 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.727 [2024-11-04 14:50:09.434421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:39.727 [2024-11-04 
14:50:09.437008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:39.727 [2024-11-04 14:50:09.437113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:39.727 [2024-11-04 14:50:09.437208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:39.727 [2024-11-04 14:50:09.437496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:39.727 [2024-11-04 14:50:09.437523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:39.727 [2024-11-04 14:50:09.437857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:39.727 [2024-11-04 14:50:09.438094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:39.727 [2024-11-04 14:50:09.438125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:39.727 [2024-11-04 14:50:09.438368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.727 "name": "raid_bdev1", 00:17:39.727 "uuid": "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2", 00:17:39.727 "strip_size_kb": 64, 00:17:39.727 "state": "online", 00:17:39.727 "raid_level": "raid0", 00:17:39.727 "superblock": true, 00:17:39.727 "num_base_bdevs": 4, 00:17:39.727 "num_base_bdevs_discovered": 4, 00:17:39.727 "num_base_bdevs_operational": 4, 00:17:39.727 "base_bdevs_list": [ 00:17:39.727 { 00:17:39.727 "name": "pt1", 00:17:39.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.727 "is_configured": true, 00:17:39.727 "data_offset": 2048, 00:17:39.727 "data_size": 63488 00:17:39.727 }, 00:17:39.727 { 00:17:39.727 "name": "pt2", 00:17:39.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.727 "is_configured": true, 00:17:39.727 "data_offset": 2048, 00:17:39.727 "data_size": 63488 00:17:39.727 }, 00:17:39.727 { 00:17:39.727 "name": "pt3", 00:17:39.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:39.727 "is_configured": true, 00:17:39.727 "data_offset": 2048, 00:17:39.727 
"data_size": 63488 00:17:39.727 }, 00:17:39.727 { 00:17:39.727 "name": "pt4", 00:17:39.727 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:39.727 "is_configured": true, 00:17:39.727 "data_offset": 2048, 00:17:39.727 "data_size": 63488 00:17:39.727 } 00:17:39.727 ] 00:17:39.727 }' 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.727 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.306 14:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.306 [2024-11-04 14:50:09.991096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.306 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.306 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:40.306 "name": "raid_bdev1", 00:17:40.306 "aliases": [ 00:17:40.306 "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2" 
00:17:40.306 ], 00:17:40.306 "product_name": "Raid Volume", 00:17:40.306 "block_size": 512, 00:17:40.306 "num_blocks": 253952, 00:17:40.306 "uuid": "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2", 00:17:40.306 "assigned_rate_limits": { 00:17:40.306 "rw_ios_per_sec": 0, 00:17:40.306 "rw_mbytes_per_sec": 0, 00:17:40.306 "r_mbytes_per_sec": 0, 00:17:40.306 "w_mbytes_per_sec": 0 00:17:40.306 }, 00:17:40.306 "claimed": false, 00:17:40.306 "zoned": false, 00:17:40.306 "supported_io_types": { 00:17:40.306 "read": true, 00:17:40.306 "write": true, 00:17:40.306 "unmap": true, 00:17:40.306 "flush": true, 00:17:40.306 "reset": true, 00:17:40.306 "nvme_admin": false, 00:17:40.306 "nvme_io": false, 00:17:40.306 "nvme_io_md": false, 00:17:40.306 "write_zeroes": true, 00:17:40.306 "zcopy": false, 00:17:40.306 "get_zone_info": false, 00:17:40.306 "zone_management": false, 00:17:40.306 "zone_append": false, 00:17:40.306 "compare": false, 00:17:40.306 "compare_and_write": false, 00:17:40.306 "abort": false, 00:17:40.306 "seek_hole": false, 00:17:40.306 "seek_data": false, 00:17:40.306 "copy": false, 00:17:40.306 "nvme_iov_md": false 00:17:40.306 }, 00:17:40.306 "memory_domains": [ 00:17:40.306 { 00:17:40.306 "dma_device_id": "system", 00:17:40.306 "dma_device_type": 1 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.306 "dma_device_type": 2 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "dma_device_id": "system", 00:17:40.306 "dma_device_type": 1 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.306 "dma_device_type": 2 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "dma_device_id": "system", 00:17:40.306 "dma_device_type": 1 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.306 "dma_device_type": 2 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "dma_device_id": "system", 00:17:40.306 "dma_device_type": 1 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:40.306 "dma_device_type": 2 00:17:40.306 } 00:17:40.306 ], 00:17:40.306 "driver_specific": { 00:17:40.306 "raid": { 00:17:40.306 "uuid": "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2", 00:17:40.306 "strip_size_kb": 64, 00:17:40.306 "state": "online", 00:17:40.306 "raid_level": "raid0", 00:17:40.306 "superblock": true, 00:17:40.306 "num_base_bdevs": 4, 00:17:40.306 "num_base_bdevs_discovered": 4, 00:17:40.306 "num_base_bdevs_operational": 4, 00:17:40.306 "base_bdevs_list": [ 00:17:40.306 { 00:17:40.306 "name": "pt1", 00:17:40.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.306 "is_configured": true, 00:17:40.306 "data_offset": 2048, 00:17:40.306 "data_size": 63488 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "name": "pt2", 00:17:40.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.306 "is_configured": true, 00:17:40.306 "data_offset": 2048, 00:17:40.306 "data_size": 63488 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "name": "pt3", 00:17:40.306 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:40.306 "is_configured": true, 00:17:40.306 "data_offset": 2048, 00:17:40.306 "data_size": 63488 00:17:40.306 }, 00:17:40.306 { 00:17:40.306 "name": "pt4", 00:17:40.306 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:40.306 "is_configured": true, 00:17:40.306 "data_offset": 2048, 00:17:40.306 "data_size": 63488 00:17:40.306 } 00:17:40.306 ] 00:17:40.306 } 00:17:40.306 } 00:17:40.306 }' 00:17:40.306 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:40.306 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:40.306 pt2 00:17:40.306 pt3 00:17:40.306 pt4' 00:17:40.306 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.306 14:50:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.307 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.565 14:50:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.565 [2024-11-04 14:50:10.355020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2 ']' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.565 [2024-11-04 14:50:10.402702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.565 [2024-11-04 14:50:10.402738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.565 [2024-11-04 14:50:10.402883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.565 [2024-11-04 14:50:10.402986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.565 [2024-11-04 14:50:10.403024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:40.565 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.823 14:50:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.823 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.823 [2024-11-04 14:50:10.550762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:40.823 [2024-11-04 14:50:10.553563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:40.823 [2024-11-04 14:50:10.553634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:40.823 [2024-11-04 14:50:10.553690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:40.823 [2024-11-04 14:50:10.553767] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:40.823 [2024-11-04 14:50:10.553838] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:40.823 [2024-11-04 14:50:10.553871] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:40.823 [2024-11-04 14:50:10.553907] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:40.823 [2024-11-04 14:50:10.553930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.823 [2024-11-04 14:50:10.553950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:17:40.823 request: 00:17:40.823 { 00:17:40.823 "name": "raid_bdev1", 00:17:40.823 "raid_level": "raid0", 00:17:40.823 "base_bdevs": [ 00:17:40.823 "malloc1", 00:17:40.823 "malloc2", 00:17:40.823 "malloc3", 00:17:40.823 "malloc4" 00:17:40.823 ], 00:17:40.823 "strip_size_kb": 64, 00:17:40.823 "superblock": false, 00:17:40.823 "method": "bdev_raid_create", 00:17:40.823 "req_id": 1 00:17:40.824 } 00:17:40.824 Got JSON-RPC error response 00:17:40.824 response: 00:17:40.824 { 00:17:40.824 "code": -17, 00:17:40.824 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:40.824 } 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.824 [2024-11-04 14:50:10.614779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.824 [2024-11-04 14:50:10.614849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.824 [2024-11-04 14:50:10.614877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:40.824 [2024-11-04 14:50:10.614894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.824 [2024-11-04 14:50:10.617998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.824 [2024-11-04 14:50:10.618050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.824 [2024-11-04 14:50:10.618152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:40.824 [2024-11-04 14:50:10.618254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.824 pt1 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.824 "name": "raid_bdev1", 00:17:40.824 "uuid": "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2", 00:17:40.824 "strip_size_kb": 64, 00:17:40.824 "state": "configuring", 00:17:40.824 "raid_level": "raid0", 00:17:40.824 "superblock": true, 00:17:40.824 "num_base_bdevs": 4, 00:17:40.824 "num_base_bdevs_discovered": 1, 00:17:40.824 "num_base_bdevs_operational": 4, 00:17:40.824 "base_bdevs_list": [ 00:17:40.824 { 00:17:40.824 "name": "pt1", 00:17:40.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.824 "is_configured": true, 00:17:40.824 "data_offset": 2048, 00:17:40.824 "data_size": 63488 00:17:40.824 }, 00:17:40.824 { 00:17:40.824 "name": null, 00:17:40.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.824 "is_configured": false, 00:17:40.824 "data_offset": 2048, 00:17:40.824 "data_size": 63488 00:17:40.824 }, 00:17:40.824 { 00:17:40.824 "name": null, 00:17:40.824 
"uuid": "00000000-0000-0000-0000-000000000003", 00:17:40.824 "is_configured": false, 00:17:40.824 "data_offset": 2048, 00:17:40.824 "data_size": 63488 00:17:40.824 }, 00:17:40.824 { 00:17:40.824 "name": null, 00:17:40.824 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:40.824 "is_configured": false, 00:17:40.824 "data_offset": 2048, 00:17:40.824 "data_size": 63488 00:17:40.824 } 00:17:40.824 ] 00:17:40.824 }' 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.824 14:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.390 [2024-11-04 14:50:11.151016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.390 [2024-11-04 14:50:11.151126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.390 [2024-11-04 14:50:11.151161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:41.390 [2024-11-04 14:50:11.151180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.390 [2024-11-04 14:50:11.151849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.390 [2024-11-04 14:50:11.151891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.390 [2024-11-04 14:50:11.152029] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:41.390 [2024-11-04 14:50:11.152076] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.390 pt2 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.390 [2024-11-04 14:50:11.159015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.390 14:50:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.390 "name": "raid_bdev1", 00:17:41.390 "uuid": "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2", 00:17:41.390 "strip_size_kb": 64, 00:17:41.390 "state": "configuring", 00:17:41.390 "raid_level": "raid0", 00:17:41.390 "superblock": true, 00:17:41.390 "num_base_bdevs": 4, 00:17:41.390 "num_base_bdevs_discovered": 1, 00:17:41.390 "num_base_bdevs_operational": 4, 00:17:41.390 "base_bdevs_list": [ 00:17:41.390 { 00:17:41.390 "name": "pt1", 00:17:41.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.390 "is_configured": true, 00:17:41.390 "data_offset": 2048, 00:17:41.390 "data_size": 63488 00:17:41.390 }, 00:17:41.390 { 00:17:41.390 "name": null, 00:17:41.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.390 "is_configured": false, 00:17:41.390 "data_offset": 0, 00:17:41.390 "data_size": 63488 00:17:41.390 }, 00:17:41.390 { 00:17:41.390 "name": null, 00:17:41.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:41.390 "is_configured": false, 00:17:41.390 "data_offset": 2048, 00:17:41.390 "data_size": 63488 00:17:41.390 }, 00:17:41.390 { 00:17:41.390 "name": null, 00:17:41.390 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:41.390 "is_configured": false, 00:17:41.390 "data_offset": 2048, 00:17:41.390 "data_size": 63488 00:17:41.390 } 00:17:41.390 ] 00:17:41.390 }' 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.390 14:50:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.955 [2024-11-04 14:50:11.683154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.955 [2024-11-04 14:50:11.683264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.955 [2024-11-04 14:50:11.683302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:41.955 [2024-11-04 14:50:11.683319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.955 [2024-11-04 14:50:11.683995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.955 [2024-11-04 14:50:11.684031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.955 [2024-11-04 14:50:11.684173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:41.955 [2024-11-04 14:50:11.684208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.955 pt2 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.955 [2024-11-04 14:50:11.695091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:41.955 [2024-11-04 14:50:11.695147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.955 [2024-11-04 14:50:11.695182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:41.955 [2024-11-04 14:50:11.695198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.955 [2024-11-04 14:50:11.695689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.955 [2024-11-04 14:50:11.695729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:41.955 [2024-11-04 14:50:11.695816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:41.955 [2024-11-04 14:50:11.695842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:41.955 pt3 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.955 [2024-11-04 14:50:11.707092] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:41.955 [2024-11-04 14:50:11.707162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.955 [2024-11-04 14:50:11.707191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:41.955 [2024-11-04 14:50:11.707204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.955 [2024-11-04 14:50:11.707711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.955 [2024-11-04 14:50:11.707746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:41.955 [2024-11-04 14:50:11.707827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:41.955 [2024-11-04 14:50:11.707855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:41.955 [2024-11-04 14:50:11.708026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:41.955 [2024-11-04 14:50:11.708050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:41.955 [2024-11-04 14:50:11.708390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:41.955 [2024-11-04 14:50:11.708587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:41.955 [2024-11-04 14:50:11.708622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:41.955 [2024-11-04 14:50:11.708807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.955 pt4 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.955 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.956 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.956 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.956 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.956 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.956 "name": "raid_bdev1", 00:17:41.956 "uuid": "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2", 00:17:41.956 "strip_size_kb": 64, 00:17:41.956 "state": "online", 00:17:41.956 "raid_level": "raid0", 00:17:41.956 
"superblock": true, 00:17:41.956 "num_base_bdevs": 4, 00:17:41.956 "num_base_bdevs_discovered": 4, 00:17:41.956 "num_base_bdevs_operational": 4, 00:17:41.956 "base_bdevs_list": [ 00:17:41.956 { 00:17:41.956 "name": "pt1", 00:17:41.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.956 "is_configured": true, 00:17:41.956 "data_offset": 2048, 00:17:41.956 "data_size": 63488 00:17:41.956 }, 00:17:41.956 { 00:17:41.956 "name": "pt2", 00:17:41.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.956 "is_configured": true, 00:17:41.956 "data_offset": 2048, 00:17:41.956 "data_size": 63488 00:17:41.956 }, 00:17:41.956 { 00:17:41.956 "name": "pt3", 00:17:41.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:41.956 "is_configured": true, 00:17:41.956 "data_offset": 2048, 00:17:41.956 "data_size": 63488 00:17:41.956 }, 00:17:41.956 { 00:17:41.956 "name": "pt4", 00:17:41.956 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:41.956 "is_configured": true, 00:17:41.956 "data_offset": 2048, 00:17:41.956 "data_size": 63488 00:17:41.956 } 00:17:41.956 ] 00:17:41.956 }' 00:17:41.956 14:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.956 14:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:42.521 14:50:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.521 [2024-11-04 14:50:12.247753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.521 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:42.521 "name": "raid_bdev1", 00:17:42.521 "aliases": [ 00:17:42.521 "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2" 00:17:42.521 ], 00:17:42.521 "product_name": "Raid Volume", 00:17:42.521 "block_size": 512, 00:17:42.521 "num_blocks": 253952, 00:17:42.521 "uuid": "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2", 00:17:42.521 "assigned_rate_limits": { 00:17:42.521 "rw_ios_per_sec": 0, 00:17:42.521 "rw_mbytes_per_sec": 0, 00:17:42.521 "r_mbytes_per_sec": 0, 00:17:42.521 "w_mbytes_per_sec": 0 00:17:42.521 }, 00:17:42.521 "claimed": false, 00:17:42.521 "zoned": false, 00:17:42.521 "supported_io_types": { 00:17:42.521 "read": true, 00:17:42.521 "write": true, 00:17:42.521 "unmap": true, 00:17:42.521 "flush": true, 00:17:42.521 "reset": true, 00:17:42.521 "nvme_admin": false, 00:17:42.521 "nvme_io": false, 00:17:42.521 "nvme_io_md": false, 00:17:42.521 "write_zeroes": true, 00:17:42.521 "zcopy": false, 00:17:42.521 "get_zone_info": false, 00:17:42.521 "zone_management": false, 00:17:42.521 "zone_append": false, 00:17:42.521 "compare": false, 00:17:42.521 "compare_and_write": false, 00:17:42.521 "abort": false, 00:17:42.521 "seek_hole": false, 00:17:42.521 "seek_data": false, 00:17:42.521 "copy": false, 00:17:42.521 "nvme_iov_md": false 00:17:42.521 }, 00:17:42.522 
"memory_domains": [ 00:17:42.522 { 00:17:42.522 "dma_device_id": "system", 00:17:42.522 "dma_device_type": 1 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.522 "dma_device_type": 2 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "dma_device_id": "system", 00:17:42.522 "dma_device_type": 1 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.522 "dma_device_type": 2 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "dma_device_id": "system", 00:17:42.522 "dma_device_type": 1 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.522 "dma_device_type": 2 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "dma_device_id": "system", 00:17:42.522 "dma_device_type": 1 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.522 "dma_device_type": 2 00:17:42.522 } 00:17:42.522 ], 00:17:42.522 "driver_specific": { 00:17:42.522 "raid": { 00:17:42.522 "uuid": "4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2", 00:17:42.522 "strip_size_kb": 64, 00:17:42.522 "state": "online", 00:17:42.522 "raid_level": "raid0", 00:17:42.522 "superblock": true, 00:17:42.522 "num_base_bdevs": 4, 00:17:42.522 "num_base_bdevs_discovered": 4, 00:17:42.522 "num_base_bdevs_operational": 4, 00:17:42.522 "base_bdevs_list": [ 00:17:42.522 { 00:17:42.522 "name": "pt1", 00:17:42.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.522 "is_configured": true, 00:17:42.522 "data_offset": 2048, 00:17:42.522 "data_size": 63488 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "name": "pt2", 00:17:42.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.522 "is_configured": true, 00:17:42.522 "data_offset": 2048, 00:17:42.522 "data_size": 63488 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "name": "pt3", 00:17:42.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:42.522 "is_configured": true, 00:17:42.522 "data_offset": 2048, 00:17:42.522 "data_size": 63488 
00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "name": "pt4", 00:17:42.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:42.522 "is_configured": true, 00:17:42.522 "data_offset": 2048, 00:17:42.522 "data_size": 63488 00:17:42.522 } 00:17:42.522 ] 00:17:42.522 } 00:17:42.522 } 00:17:42.522 }' 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:42.522 pt2 00:17:42.522 pt3 00:17:42.522 pt4' 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.522 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:42.780 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.781 [2024-11-04 14:50:12.635705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.781 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.039 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2 '!=' 4f3349fe-d63d-49ef-a7ba-b9a0b514c2b2 ']' 00:17:43.039 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70964 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70964 ']' 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70964 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70964 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:43.040 killing process with pid 70964 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70964' 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70964 00:17:43.040 [2024-11-04 14:50:12.711453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.040 14:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70964 00:17:43.040 [2024-11-04 14:50:12.711564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.040 [2024-11-04 14:50:12.711667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.040 [2024-11-04 14:50:12.711693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:43.297 [2024-11-04 14:50:13.098955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.793 14:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:44.793 00:17:44.793 real 0m6.215s 00:17:44.793 user 0m9.242s 00:17:44.793 sys 0m0.983s 00:17:44.793 14:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:44.793 ************************************ 00:17:44.793 END TEST raid_superblock_test 00:17:44.793 ************************************ 00:17:44.793 14:50:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.793 14:50:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:17:44.793 14:50:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:44.793 14:50:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:44.793 14:50:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.793 ************************************ 00:17:44.793 START TEST raid_read_error_test 00:17:44.793 ************************************ 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:44.793 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q8gBrl2y1C 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71235 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71235 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71235 ']' 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:44.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:44.794 14:50:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.794 [2024-11-04 14:50:14.410974] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:17:44.794 [2024-11-04 14:50:14.411150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71235 ] 00:17:44.794 [2024-11-04 14:50:14.596574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.052 [2024-11-04 14:50:14.774300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.310 [2024-11-04 14:50:15.025010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.310 [2024-11-04 14:50:15.025107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.567 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:45.567 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:45.567 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:45.567 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:45.567 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.567 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 BaseBdev1_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 true 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 [2024-11-04 14:50:15.518412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:45.826 [2024-11-04 14:50:15.518503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.826 [2024-11-04 14:50:15.518537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:45.826 [2024-11-04 14:50:15.518557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.826 [2024-11-04 14:50:15.521699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.826 [2024-11-04 14:50:15.521922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.826 BaseBdev1 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 BaseBdev2_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 true 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 [2024-11-04 14:50:15.583045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:45.826 [2024-11-04 14:50:15.583135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.826 [2024-11-04 14:50:15.583164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:45.826 [2024-11-04 14:50:15.583183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.826 [2024-11-04 14:50:15.586370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.826 [2024-11-04 14:50:15.586424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:45.826 BaseBdev2 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 BaseBdev3_malloc 00:17:45.826 14:50:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 true 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 [2024-11-04 14:50:15.662300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:45.826 [2024-11-04 14:50:15.662389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.826 [2024-11-04 14:50:15.662420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:45.826 [2024-11-04 14:50:15.662439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.826 [2024-11-04 14:50:15.665483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.826 [2024-11-04 14:50:15.665700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:45.826 BaseBdev3 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 BaseBdev4_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.826 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.084 true 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.084 [2024-11-04 14:50:15.724541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:46.084 [2024-11-04 14:50:15.724772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.084 [2024-11-04 14:50:15.724813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:46.084 [2024-11-04 14:50:15.724833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.084 [2024-11-04 14:50:15.727964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.084 [2024-11-04 14:50:15.728161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:46.084 BaseBdev4 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.084 [2024-11-04 14:50:15.732873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.084 [2024-11-04 14:50:15.735631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.084 [2024-11-04 14:50:15.735755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.084 [2024-11-04 14:50:15.735878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.084 [2024-11-04 14:50:15.736250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:46.084 [2024-11-04 14:50:15.736280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:46.084 [2024-11-04 14:50:15.736613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:46.084 [2024-11-04 14:50:15.736864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:46.084 [2024-11-04 14:50:15.736884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:46.084 [2024-11-04 14:50:15.737144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:46.084 14:50:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.084 "name": "raid_bdev1", 00:17:46.084 "uuid": "472f8d78-d0af-406a-95ab-d3bea543af78", 00:17:46.084 "strip_size_kb": 64, 00:17:46.084 "state": "online", 00:17:46.084 "raid_level": "raid0", 00:17:46.084 "superblock": true, 00:17:46.084 "num_base_bdevs": 4, 00:17:46.084 "num_base_bdevs_discovered": 4, 00:17:46.084 "num_base_bdevs_operational": 4, 00:17:46.084 "base_bdevs_list": [ 00:17:46.084 
{ 00:17:46.084 "name": "BaseBdev1", 00:17:46.084 "uuid": "ced49601-00bc-5aa7-a14a-e33061447421", 00:17:46.084 "is_configured": true, 00:17:46.084 "data_offset": 2048, 00:17:46.084 "data_size": 63488 00:17:46.084 }, 00:17:46.084 { 00:17:46.084 "name": "BaseBdev2", 00:17:46.084 "uuid": "b332bc1b-be56-5be3-9875-bbaded6cf1a9", 00:17:46.084 "is_configured": true, 00:17:46.084 "data_offset": 2048, 00:17:46.084 "data_size": 63488 00:17:46.084 }, 00:17:46.084 { 00:17:46.084 "name": "BaseBdev3", 00:17:46.084 "uuid": "edc02fc3-a806-56fc-a3ec-0c56a691b62f", 00:17:46.084 "is_configured": true, 00:17:46.084 "data_offset": 2048, 00:17:46.084 "data_size": 63488 00:17:46.084 }, 00:17:46.084 { 00:17:46.084 "name": "BaseBdev4", 00:17:46.084 "uuid": "4d26ad93-57f2-5dd0-948c-af42dacd74b8", 00:17:46.084 "is_configured": true, 00:17:46.084 "data_offset": 2048, 00:17:46.084 "data_size": 63488 00:17:46.084 } 00:17:46.084 ] 00:17:46.084 }' 00:17:46.084 14:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.085 14:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.650 14:50:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:46.650 14:50:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:46.650 [2024-11-04 14:50:16.382858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.584 14:50:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.584 14:50:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.584 "name": "raid_bdev1", 00:17:47.584 "uuid": "472f8d78-d0af-406a-95ab-d3bea543af78", 00:17:47.584 "strip_size_kb": 64, 00:17:47.584 "state": "online", 00:17:47.584 "raid_level": "raid0", 00:17:47.584 "superblock": true, 00:17:47.584 "num_base_bdevs": 4, 00:17:47.584 "num_base_bdevs_discovered": 4, 00:17:47.584 "num_base_bdevs_operational": 4, 00:17:47.584 "base_bdevs_list": [ 00:17:47.584 { 00:17:47.584 "name": "BaseBdev1", 00:17:47.584 "uuid": "ced49601-00bc-5aa7-a14a-e33061447421", 00:17:47.584 "is_configured": true, 00:17:47.584 "data_offset": 2048, 00:17:47.584 "data_size": 63488 00:17:47.584 }, 00:17:47.584 { 00:17:47.584 "name": "BaseBdev2", 00:17:47.584 "uuid": "b332bc1b-be56-5be3-9875-bbaded6cf1a9", 00:17:47.584 "is_configured": true, 00:17:47.584 "data_offset": 2048, 00:17:47.584 "data_size": 63488 00:17:47.584 }, 00:17:47.584 { 00:17:47.584 "name": "BaseBdev3", 00:17:47.584 "uuid": "edc02fc3-a806-56fc-a3ec-0c56a691b62f", 00:17:47.584 "is_configured": true, 00:17:47.584 "data_offset": 2048, 00:17:47.584 "data_size": 63488 00:17:47.584 }, 00:17:47.584 { 00:17:47.584 "name": "BaseBdev4", 00:17:47.584 "uuid": "4d26ad93-57f2-5dd0-948c-af42dacd74b8", 00:17:47.584 "is_configured": true, 00:17:47.584 "data_offset": 2048, 00:17:47.584 "data_size": 63488 00:17:47.584 } 00:17:47.584 ] 00:17:47.584 }' 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.584 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.152 [2024-11-04 14:50:17.846780] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.152 [2024-11-04 14:50:17.847003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.152 [2024-11-04 14:50:17.850781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.152 [2024-11-04 14:50:17.850939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.152 [2024-11-04 14:50:17.851013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.152 [2024-11-04 14:50:17.851034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:48.152 { 00:17:48.152 "results": [ 00:17:48.152 { 00:17:48.152 "job": "raid_bdev1", 00:17:48.152 "core_mask": "0x1", 00:17:48.152 "workload": "randrw", 00:17:48.152 "percentage": 50, 00:17:48.152 "status": "finished", 00:17:48.152 "queue_depth": 1, 00:17:48.152 "io_size": 131072, 00:17:48.152 "runtime": 1.461165, 00:17:48.152 "iops": 9272.73784959262, 00:17:48.152 "mibps": 1159.0922311990776, 00:17:48.152 "io_failed": 1, 00:17:48.152 "io_timeout": 0, 00:17:48.152 "avg_latency_us": 152.4260927205636, 00:17:48.152 "min_latency_us": 37.70181818181818, 00:17:48.152 "max_latency_us": 1951.1854545454546 00:17:48.152 } 00:17:48.152 ], 00:17:48.152 "core_count": 1 00:17:48.152 } 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71235 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71235 ']' 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71235 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71235 00:17:48.152 killing process with pid 71235 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71235' 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71235 00:17:48.152 [2024-11-04 14:50:17.901456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.152 14:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71235 00:17:48.410 [2024-11-04 14:50:18.212955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q8gBrl2y1C 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:17:49.811 ************************************ 00:17:49.811 END TEST raid_read_error_test 00:17:49.811 ************************************ 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:17:49.811 00:17:49.811 real 0m5.073s 
00:17:49.811 user 0m6.208s 00:17:49.811 sys 0m0.705s 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:49.811 14:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.811 14:50:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:17:49.811 14:50:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:49.811 14:50:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:49.811 14:50:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.811 ************************************ 00:17:49.811 START TEST raid_write_error_test 00:17:49.811 ************************************ 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:49.811 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.w2tSVsY0CT 00:17:49.812 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71386 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71386 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71386 ']' 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:49.812 14:50:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.812 [2024-11-04 14:50:19.565421] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:17:49.812 [2024-11-04 14:50:19.565648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71386 ] 00:17:50.071 [2024-11-04 14:50:19.749609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.071 [2024-11-04 14:50:19.891507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.329 [2024-11-04 14:50:20.114039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.330 [2024-11-04 14:50:20.114471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 BaseBdev1_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 true 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 [2024-11-04 14:50:20.576890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:50.898 [2024-11-04 14:50:20.576986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.898 [2024-11-04 14:50:20.577035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:50.898 [2024-11-04 14:50:20.577054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.898 [2024-11-04 14:50:20.580455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.898 [2024-11-04 14:50:20.580502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:50.898 BaseBdev1 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 BaseBdev2_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:50.898 14:50:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 true 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 [2024-11-04 14:50:20.638969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:50.898 [2024-11-04 14:50:20.639087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.898 [2024-11-04 14:50:20.639117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:50.898 [2024-11-04 14:50:20.639135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.898 [2024-11-04 14:50:20.642550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.898 [2024-11-04 14:50:20.642620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:50.898 BaseBdev2 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:50.898 BaseBdev3_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 true 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 [2024-11-04 14:50:20.718623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:50.898 [2024-11-04 14:50:20.718885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.898 [2024-11-04 14:50:20.718940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:50.898 [2024-11-04 14:50:20.718961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.898 [2024-11-04 14:50:20.722410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.898 [2024-11-04 14:50:20.722483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:50.898 BaseBdev3 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 BaseBdev4_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 true 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.898 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.898 [2024-11-04 14:50:20.786941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:50.898 [2024-11-04 14:50:20.787055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.898 [2024-11-04 14:50:20.787088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:50.898 [2024-11-04 14:50:20.787106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.158 [2024-11-04 14:50:20.790609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.158 [2024-11-04 14:50:20.790706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:51.158 BaseBdev4 
00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.158 [2024-11-04 14:50:20.795076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.158 [2024-11-04 14:50:20.797773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.158 [2024-11-04 14:50:20.797908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.158 [2024-11-04 14:50:20.798005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.158 [2024-11-04 14:50:20.798445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:51.158 [2024-11-04 14:50:20.798473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:51.158 [2024-11-04 14:50:20.798850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:51.158 [2024-11-04 14:50:20.799152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:51.158 [2024-11-04 14:50:20.799174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:51.158 [2024-11-04 14:50:20.799480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.158 "name": "raid_bdev1", 00:17:51.158 "uuid": "76600859-7a4c-4b64-8fbb-4382e425245b", 00:17:51.158 "strip_size_kb": 64, 00:17:51.158 "state": "online", 00:17:51.158 "raid_level": "raid0", 00:17:51.158 "superblock": true, 00:17:51.158 "num_base_bdevs": 4, 00:17:51.158 "num_base_bdevs_discovered": 4, 00:17:51.158 
"num_base_bdevs_operational": 4, 00:17:51.158 "base_bdevs_list": [ 00:17:51.158 { 00:17:51.158 "name": "BaseBdev1", 00:17:51.158 "uuid": "f2715db1-719a-5e71-914d-c9334c00fe54", 00:17:51.158 "is_configured": true, 00:17:51.158 "data_offset": 2048, 00:17:51.158 "data_size": 63488 00:17:51.158 }, 00:17:51.158 { 00:17:51.158 "name": "BaseBdev2", 00:17:51.158 "uuid": "81d4dfd0-a188-5d12-a655-503a75162ffa", 00:17:51.158 "is_configured": true, 00:17:51.158 "data_offset": 2048, 00:17:51.158 "data_size": 63488 00:17:51.158 }, 00:17:51.158 { 00:17:51.158 "name": "BaseBdev3", 00:17:51.158 "uuid": "5233c66d-0d9d-5418-a3fa-287fa42b3fcb", 00:17:51.158 "is_configured": true, 00:17:51.158 "data_offset": 2048, 00:17:51.158 "data_size": 63488 00:17:51.158 }, 00:17:51.158 { 00:17:51.158 "name": "BaseBdev4", 00:17:51.158 "uuid": "03af7e4a-4a32-5e30-a873-44a4c9606e78", 00:17:51.158 "is_configured": true, 00:17:51.158 "data_offset": 2048, 00:17:51.158 "data_size": 63488 00:17:51.158 } 00:17:51.158 ] 00:17:51.158 }' 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.158 14:50:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.725 14:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:51.725 14:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:51.725 [2024-11-04 14:50:21.481397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.661 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.661 "name": "raid_bdev1", 00:17:52.661 "uuid": "76600859-7a4c-4b64-8fbb-4382e425245b", 00:17:52.661 "strip_size_kb": 64, 00:17:52.661 "state": "online", 00:17:52.661 "raid_level": "raid0", 00:17:52.661 "superblock": true, 00:17:52.661 "num_base_bdevs": 4, 00:17:52.661 "num_base_bdevs_discovered": 4, 00:17:52.661 "num_base_bdevs_operational": 4, 00:17:52.661 "base_bdevs_list": [ 00:17:52.661 { 00:17:52.661 "name": "BaseBdev1", 00:17:52.661 "uuid": "f2715db1-719a-5e71-914d-c9334c00fe54", 00:17:52.661 "is_configured": true, 00:17:52.661 "data_offset": 2048, 00:17:52.661 "data_size": 63488 00:17:52.661 }, 00:17:52.661 { 00:17:52.661 "name": "BaseBdev2", 00:17:52.661 "uuid": "81d4dfd0-a188-5d12-a655-503a75162ffa", 00:17:52.661 "is_configured": true, 00:17:52.661 "data_offset": 2048, 00:17:52.661 "data_size": 63488 00:17:52.661 }, 00:17:52.661 { 00:17:52.661 "name": "BaseBdev3", 00:17:52.662 "uuid": "5233c66d-0d9d-5418-a3fa-287fa42b3fcb", 00:17:52.662 "is_configured": true, 00:17:52.662 "data_offset": 2048, 00:17:52.662 "data_size": 63488 00:17:52.662 }, 00:17:52.662 { 00:17:52.662 "name": "BaseBdev4", 00:17:52.662 "uuid": "03af7e4a-4a32-5e30-a873-44a4c9606e78", 00:17:52.662 "is_configured": true, 00:17:52.662 "data_offset": 2048, 00:17:52.662 "data_size": 63488 00:17:52.662 } 00:17:52.662 ] 00:17:52.662 }' 00:17:52.662 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.662 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:53.229 [2024-11-04 14:50:22.928516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.229 [2024-11-04 14:50:22.928782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.229 [2024-11-04 14:50:22.932343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.229 [2024-11-04 14:50:22.932638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.229 [2024-11-04 14:50:22.932714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.229 [2024-11-04 14:50:22.932735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:53.229 { 00:17:53.229 "results": [ 00:17:53.229 { 00:17:53.229 "job": "raid_bdev1", 00:17:53.229 "core_mask": "0x1", 00:17:53.229 "workload": "randrw", 00:17:53.229 "percentage": 50, 00:17:53.229 "status": "finished", 00:17:53.229 "queue_depth": 1, 00:17:53.229 "io_size": 131072, 00:17:53.229 "runtime": 1.444655, 00:17:53.229 "iops": 9290.107326662766, 00:17:53.229 "mibps": 1161.2634158328458, 00:17:53.229 "io_failed": 1, 00:17:53.229 "io_timeout": 0, 00:17:53.229 "avg_latency_us": 151.9778274474743, 00:17:53.229 "min_latency_us": 37.93454545454546, 00:17:53.229 "max_latency_us": 1980.9745454545455 00:17:53.229 } 00:17:53.229 ], 00:17:53.229 "core_count": 1 00:17:53.229 } 00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71386 00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71386 ']' 00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71386 00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:17:53.229 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:53.230 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71386 00:17:53.230 killing process with pid 71386 00:17:53.230 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:53.230 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:53.230 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71386' 00:17:53.230 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71386 00:17:53.230 [2024-11-04 14:50:22.972199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.230 14:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71386 00:17:53.488 [2024-11-04 14:50:23.283981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.868 14:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:54.868 14:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.w2tSVsY0CT 00:17:54.868 14:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:54.868 14:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:17:54.868 14:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:17:54.868 ************************************ 00:17:54.868 END TEST raid_write_error_test 00:17:54.868 ************************************ 00:17:54.868 14:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:54.868 14:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:54.869 14:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.69 != \0\.\0\0 ]] 00:17:54.869 00:17:54.869 real 0m5.161s 00:17:54.869 user 0m6.220s 00:17:54.869 sys 0m0.721s 00:17:54.869 14:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:54.869 14:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.869 14:50:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:54.869 14:50:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:17:54.869 14:50:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:54.869 14:50:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:54.869 14:50:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.869 ************************************ 00:17:54.869 START TEST raid_state_function_test 00:17:54.869 ************************************ 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71534 00:17:54.869 Process raid pid: 71534 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71534' 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71534 00:17:54.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71534 ']' 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:54.869 14:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.128 [2024-11-04 14:50:24.776429] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:17:55.128 [2024-11-04 14:50:24.776912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.128 [2024-11-04 14:50:24.960344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.387 [2024-11-04 14:50:25.130132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.646 [2024-11-04 14:50:25.389619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.646 [2024-11-04 14:50:25.389688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.904 14:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:55.904 14:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:55.904 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:55.904 14:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.904 14:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.904 [2024-11-04 14:50:25.792784] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.904 [2024-11-04 14:50:25.792862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.904 [2024-11-04 14:50:25.792882] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.904 [2024-11-04 14:50:25.792899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.904 [2024-11-04 14:50:25.792909] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:55.904 [2024-11-04 14:50:25.792933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.904 [2024-11-04 14:50:25.792943] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.904 [2024-11-04 14:50:25.792957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.163 "name": "Existed_Raid", 00:17:56.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.163 "strip_size_kb": 64, 00:17:56.163 "state": "configuring", 00:17:56.163 "raid_level": "concat", 00:17:56.163 "superblock": false, 00:17:56.163 "num_base_bdevs": 4, 00:17:56.163 "num_base_bdevs_discovered": 0, 00:17:56.163 "num_base_bdevs_operational": 4, 00:17:56.163 "base_bdevs_list": [ 00:17:56.163 { 00:17:56.163 "name": "BaseBdev1", 00:17:56.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.163 "is_configured": false, 00:17:56.163 "data_offset": 0, 00:17:56.163 "data_size": 0 00:17:56.163 }, 00:17:56.163 { 00:17:56.163 "name": "BaseBdev2", 00:17:56.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.163 "is_configured": false, 00:17:56.163 "data_offset": 0, 00:17:56.163 "data_size": 0 00:17:56.163 }, 00:17:56.163 { 00:17:56.163 "name": "BaseBdev3", 00:17:56.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.163 "is_configured": false, 00:17:56.163 "data_offset": 0, 00:17:56.163 "data_size": 0 00:17:56.163 }, 00:17:56.163 { 00:17:56.163 "name": "BaseBdev4", 00:17:56.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.163 "is_configured": false, 00:17:56.163 "data_offset": 0, 00:17:56.163 "data_size": 0 00:17:56.163 } 00:17:56.163 ] 00:17:56.163 }' 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.163 14:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.730 [2024-11-04 14:50:26.325034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.730 [2024-11-04 14:50:26.325111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.730 [2024-11-04 14:50:26.332950] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.730 [2024-11-04 14:50:26.333042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.730 [2024-11-04 14:50:26.333068] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.730 [2024-11-04 14:50:26.333090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.730 [2024-11-04 14:50:26.333100] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.730 [2024-11-04 14:50:26.333115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.730 [2024-11-04 14:50:26.333140] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:56.730 [2024-11-04 14:50:26.333169] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.730 [2024-11-04 14:50:26.389285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.730 BaseBdev1 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.730 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.730 [ 00:17:56.730 { 00:17:56.730 "name": "BaseBdev1", 00:17:56.730 "aliases": [ 00:17:56.730 "551c6be1-b92f-450b-8ac0-a95ffbd0aa0f" 00:17:56.730 ], 00:17:56.730 "product_name": "Malloc disk", 00:17:56.730 "block_size": 512, 00:17:56.730 "num_blocks": 65536, 00:17:56.730 "uuid": "551c6be1-b92f-450b-8ac0-a95ffbd0aa0f", 00:17:56.731 "assigned_rate_limits": { 00:17:56.731 "rw_ios_per_sec": 0, 00:17:56.731 "rw_mbytes_per_sec": 0, 00:17:56.731 "r_mbytes_per_sec": 0, 00:17:56.731 "w_mbytes_per_sec": 0 00:17:56.731 }, 00:17:56.731 "claimed": true, 00:17:56.731 "claim_type": "exclusive_write", 00:17:56.731 "zoned": false, 00:17:56.731 "supported_io_types": { 00:17:56.731 "read": true, 00:17:56.731 "write": true, 00:17:56.731 "unmap": true, 00:17:56.731 "flush": true, 00:17:56.731 "reset": true, 00:17:56.731 "nvme_admin": false, 00:17:56.731 "nvme_io": false, 00:17:56.731 "nvme_io_md": false, 00:17:56.731 "write_zeroes": true, 00:17:56.731 "zcopy": true, 00:17:56.731 "get_zone_info": false, 00:17:56.731 "zone_management": false, 00:17:56.731 "zone_append": false, 00:17:56.731 "compare": false, 00:17:56.731 "compare_and_write": false, 00:17:56.731 "abort": true, 00:17:56.731 "seek_hole": false, 00:17:56.731 "seek_data": false, 00:17:56.731 "copy": true, 00:17:56.731 "nvme_iov_md": false 00:17:56.731 }, 00:17:56.731 "memory_domains": [ 00:17:56.731 { 00:17:56.731 "dma_device_id": "system", 00:17:56.731 "dma_device_type": 1 00:17:56.731 }, 00:17:56.731 { 00:17:56.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.731 "dma_device_type": 2 00:17:56.731 } 00:17:56.731 ], 00:17:56.731 "driver_specific": {} 00:17:56.731 } 00:17:56.731 ] 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.731 "name": "Existed_Raid", 
00:17:56.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.731 "strip_size_kb": 64, 00:17:56.731 "state": "configuring", 00:17:56.731 "raid_level": "concat", 00:17:56.731 "superblock": false, 00:17:56.731 "num_base_bdevs": 4, 00:17:56.731 "num_base_bdevs_discovered": 1, 00:17:56.731 "num_base_bdevs_operational": 4, 00:17:56.731 "base_bdevs_list": [ 00:17:56.731 { 00:17:56.731 "name": "BaseBdev1", 00:17:56.731 "uuid": "551c6be1-b92f-450b-8ac0-a95ffbd0aa0f", 00:17:56.731 "is_configured": true, 00:17:56.731 "data_offset": 0, 00:17:56.731 "data_size": 65536 00:17:56.731 }, 00:17:56.731 { 00:17:56.731 "name": "BaseBdev2", 00:17:56.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.731 "is_configured": false, 00:17:56.731 "data_offset": 0, 00:17:56.731 "data_size": 0 00:17:56.731 }, 00:17:56.731 { 00:17:56.731 "name": "BaseBdev3", 00:17:56.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.731 "is_configured": false, 00:17:56.731 "data_offset": 0, 00:17:56.731 "data_size": 0 00:17:56.731 }, 00:17:56.731 { 00:17:56.731 "name": "BaseBdev4", 00:17:56.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.731 "is_configured": false, 00:17:56.731 "data_offset": 0, 00:17:56.731 "data_size": 0 00:17:56.731 } 00:17:56.731 ] 00:17:56.731 }' 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.731 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.297 [2024-11-04 14:50:26.961644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.297 [2024-11-04 14:50:26.961726] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.297 [2024-11-04 14:50:26.969701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.297 [2024-11-04 14:50:26.972690] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.297 [2024-11-04 14:50:26.972942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.297 [2024-11-04 14:50:26.972973] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.297 [2024-11-04 14:50:26.972994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.297 [2024-11-04 14:50:26.973004] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:57.297 [2024-11-04 14:50:26.973018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.297 14:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.297 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.297 "name": "Existed_Raid", 00:17:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.297 "strip_size_kb": 64, 00:17:57.297 "state": "configuring", 00:17:57.297 "raid_level": "concat", 00:17:57.297 "superblock": false, 00:17:57.297 "num_base_bdevs": 4, 00:17:57.297 
"num_base_bdevs_discovered": 1, 00:17:57.297 "num_base_bdevs_operational": 4, 00:17:57.297 "base_bdevs_list": [ 00:17:57.297 { 00:17:57.297 "name": "BaseBdev1", 00:17:57.297 "uuid": "551c6be1-b92f-450b-8ac0-a95ffbd0aa0f", 00:17:57.297 "is_configured": true, 00:17:57.297 "data_offset": 0, 00:17:57.297 "data_size": 65536 00:17:57.297 }, 00:17:57.297 { 00:17:57.297 "name": "BaseBdev2", 00:17:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.297 "is_configured": false, 00:17:57.297 "data_offset": 0, 00:17:57.297 "data_size": 0 00:17:57.297 }, 00:17:57.297 { 00:17:57.297 "name": "BaseBdev3", 00:17:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.297 "is_configured": false, 00:17:57.297 "data_offset": 0, 00:17:57.297 "data_size": 0 00:17:57.297 }, 00:17:57.297 { 00:17:57.297 "name": "BaseBdev4", 00:17:57.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.297 "is_configured": false, 00:17:57.297 "data_offset": 0, 00:17:57.297 "data_size": 0 00:17:57.297 } 00:17:57.297 ] 00:17:57.297 }' 00:17:57.297 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.297 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.864 [2024-11-04 14:50:27.540054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.864 BaseBdev2 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:57.864 14:50:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.864 [ 00:17:57.864 { 00:17:57.864 "name": "BaseBdev2", 00:17:57.864 "aliases": [ 00:17:57.864 "49fdadd1-b181-4e28-871c-3d09901074ad" 00:17:57.864 ], 00:17:57.864 "product_name": "Malloc disk", 00:17:57.864 "block_size": 512, 00:17:57.864 "num_blocks": 65536, 00:17:57.864 "uuid": "49fdadd1-b181-4e28-871c-3d09901074ad", 00:17:57.864 "assigned_rate_limits": { 00:17:57.864 "rw_ios_per_sec": 0, 00:17:57.864 "rw_mbytes_per_sec": 0, 00:17:57.864 "r_mbytes_per_sec": 0, 00:17:57.864 "w_mbytes_per_sec": 0 00:17:57.864 }, 00:17:57.864 "claimed": true, 00:17:57.864 "claim_type": "exclusive_write", 00:17:57.864 "zoned": false, 00:17:57.864 "supported_io_types": { 
00:17:57.864 "read": true, 00:17:57.864 "write": true, 00:17:57.864 "unmap": true, 00:17:57.864 "flush": true, 00:17:57.864 "reset": true, 00:17:57.864 "nvme_admin": false, 00:17:57.864 "nvme_io": false, 00:17:57.864 "nvme_io_md": false, 00:17:57.864 "write_zeroes": true, 00:17:57.864 "zcopy": true, 00:17:57.864 "get_zone_info": false, 00:17:57.864 "zone_management": false, 00:17:57.864 "zone_append": false, 00:17:57.864 "compare": false, 00:17:57.864 "compare_and_write": false, 00:17:57.864 "abort": true, 00:17:57.864 "seek_hole": false, 00:17:57.864 "seek_data": false, 00:17:57.864 "copy": true, 00:17:57.864 "nvme_iov_md": false 00:17:57.864 }, 00:17:57.864 "memory_domains": [ 00:17:57.864 { 00:17:57.864 "dma_device_id": "system", 00:17:57.864 "dma_device_type": 1 00:17:57.864 }, 00:17:57.864 { 00:17:57.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.864 "dma_device_type": 2 00:17:57.864 } 00:17:57.864 ], 00:17:57.864 "driver_specific": {} 00:17:57.864 } 00:17:57.864 ] 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.864 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.865 "name": "Existed_Raid", 00:17:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.865 "strip_size_kb": 64, 00:17:57.865 "state": "configuring", 00:17:57.865 "raid_level": "concat", 00:17:57.865 "superblock": false, 00:17:57.865 "num_base_bdevs": 4, 00:17:57.865 "num_base_bdevs_discovered": 2, 00:17:57.865 "num_base_bdevs_operational": 4, 00:17:57.865 "base_bdevs_list": [ 00:17:57.865 { 00:17:57.865 "name": "BaseBdev1", 00:17:57.865 "uuid": "551c6be1-b92f-450b-8ac0-a95ffbd0aa0f", 00:17:57.865 "is_configured": true, 00:17:57.865 "data_offset": 0, 00:17:57.865 "data_size": 65536 00:17:57.865 }, 00:17:57.865 { 00:17:57.865 "name": "BaseBdev2", 00:17:57.865 "uuid": "49fdadd1-b181-4e28-871c-3d09901074ad", 00:17:57.865 
"is_configured": true, 00:17:57.865 "data_offset": 0, 00:17:57.865 "data_size": 65536 00:17:57.865 }, 00:17:57.865 { 00:17:57.865 "name": "BaseBdev3", 00:17:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.865 "is_configured": false, 00:17:57.865 "data_offset": 0, 00:17:57.865 "data_size": 0 00:17:57.865 }, 00:17:57.865 { 00:17:57.865 "name": "BaseBdev4", 00:17:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.865 "is_configured": false, 00:17:57.865 "data_offset": 0, 00:17:57.865 "data_size": 0 00:17:57.865 } 00:17:57.865 ] 00:17:57.865 }' 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.865 14:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.432 [2024-11-04 14:50:28.162056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.432 BaseBdev3 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.432 [ 00:17:58.432 { 00:17:58.432 "name": "BaseBdev3", 00:17:58.432 "aliases": [ 00:17:58.432 "af8116f1-3feb-4e8d-a293-2ac24ec13ef3" 00:17:58.432 ], 00:17:58.432 "product_name": "Malloc disk", 00:17:58.432 "block_size": 512, 00:17:58.432 "num_blocks": 65536, 00:17:58.432 "uuid": "af8116f1-3feb-4e8d-a293-2ac24ec13ef3", 00:17:58.432 "assigned_rate_limits": { 00:17:58.432 "rw_ios_per_sec": 0, 00:17:58.432 "rw_mbytes_per_sec": 0, 00:17:58.432 "r_mbytes_per_sec": 0, 00:17:58.432 "w_mbytes_per_sec": 0 00:17:58.432 }, 00:17:58.432 "claimed": true, 00:17:58.432 "claim_type": "exclusive_write", 00:17:58.432 "zoned": false, 00:17:58.432 "supported_io_types": { 00:17:58.432 "read": true, 00:17:58.432 "write": true, 00:17:58.432 "unmap": true, 00:17:58.432 "flush": true, 00:17:58.432 "reset": true, 00:17:58.432 "nvme_admin": false, 00:17:58.432 "nvme_io": false, 00:17:58.432 "nvme_io_md": false, 00:17:58.432 "write_zeroes": true, 00:17:58.432 "zcopy": true, 00:17:58.432 "get_zone_info": false, 00:17:58.432 "zone_management": false, 00:17:58.432 "zone_append": false, 00:17:58.432 "compare": false, 00:17:58.432 "compare_and_write": false, 
00:17:58.432 "abort": true, 00:17:58.432 "seek_hole": false, 00:17:58.432 "seek_data": false, 00:17:58.432 "copy": true, 00:17:58.432 "nvme_iov_md": false 00:17:58.432 }, 00:17:58.432 "memory_domains": [ 00:17:58.432 { 00:17:58.432 "dma_device_id": "system", 00:17:58.432 "dma_device_type": 1 00:17:58.432 }, 00:17:58.432 { 00:17:58.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.432 "dma_device_type": 2 00:17:58.432 } 00:17:58.432 ], 00:17:58.432 "driver_specific": {} 00:17:58.432 } 00:17:58.432 ] 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.432 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.432 "name": "Existed_Raid", 00:17:58.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.432 "strip_size_kb": 64, 00:17:58.432 "state": "configuring", 00:17:58.432 "raid_level": "concat", 00:17:58.432 "superblock": false, 00:17:58.432 "num_base_bdevs": 4, 00:17:58.432 "num_base_bdevs_discovered": 3, 00:17:58.432 "num_base_bdevs_operational": 4, 00:17:58.432 "base_bdevs_list": [ 00:17:58.432 { 00:17:58.432 "name": "BaseBdev1", 00:17:58.432 "uuid": "551c6be1-b92f-450b-8ac0-a95ffbd0aa0f", 00:17:58.432 "is_configured": true, 00:17:58.432 "data_offset": 0, 00:17:58.432 "data_size": 65536 00:17:58.432 }, 00:17:58.432 { 00:17:58.432 "name": "BaseBdev2", 00:17:58.432 "uuid": "49fdadd1-b181-4e28-871c-3d09901074ad", 00:17:58.432 "is_configured": true, 00:17:58.432 "data_offset": 0, 00:17:58.432 "data_size": 65536 00:17:58.432 }, 00:17:58.432 { 00:17:58.432 "name": "BaseBdev3", 00:17:58.432 "uuid": "af8116f1-3feb-4e8d-a293-2ac24ec13ef3", 00:17:58.432 "is_configured": true, 00:17:58.432 "data_offset": 0, 00:17:58.432 "data_size": 65536 00:17:58.432 }, 00:17:58.433 { 00:17:58.433 "name": "BaseBdev4", 00:17:58.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.433 "is_configured": false, 
00:17:58.433 "data_offset": 0, 00:17:58.433 "data_size": 0 00:17:58.433 } 00:17:58.433 ] 00:17:58.433 }' 00:17:58.433 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.433 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.000 [2024-11-04 14:50:28.771020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:59.000 [2024-11-04 14:50:28.771103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:59.000 [2024-11-04 14:50:28.771125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:59.000 [2024-11-04 14:50:28.771582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:59.000 [2024-11-04 14:50:28.771841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:59.000 [2024-11-04 14:50:28.771865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:59.000 [2024-11-04 14:50:28.772220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.000 BaseBdev4 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.000 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.000 [ 00:17:59.000 { 00:17:59.000 "name": "BaseBdev4", 00:17:59.000 "aliases": [ 00:17:59.000 "4e9a8e63-f02c-45ec-a122-da13336eb639" 00:17:59.000 ], 00:17:59.000 "product_name": "Malloc disk", 00:17:59.000 "block_size": 512, 00:17:59.000 "num_blocks": 65536, 00:17:59.000 "uuid": "4e9a8e63-f02c-45ec-a122-da13336eb639", 00:17:59.000 "assigned_rate_limits": { 00:17:59.000 "rw_ios_per_sec": 0, 00:17:59.000 "rw_mbytes_per_sec": 0, 00:17:59.000 "r_mbytes_per_sec": 0, 00:17:59.000 "w_mbytes_per_sec": 0 00:17:59.000 }, 00:17:59.000 "claimed": true, 00:17:59.000 "claim_type": "exclusive_write", 00:17:59.000 "zoned": false, 00:17:59.000 "supported_io_types": { 00:17:59.000 "read": true, 00:17:59.000 "write": true, 00:17:59.000 "unmap": true, 00:17:59.000 "flush": true, 00:17:59.000 "reset": true, 00:17:59.000 
"nvme_admin": false, 00:17:59.000 "nvme_io": false, 00:17:59.000 "nvme_io_md": false, 00:17:59.000 "write_zeroes": true, 00:17:59.000 "zcopy": true, 00:17:59.000 "get_zone_info": false, 00:17:59.000 "zone_management": false, 00:17:59.000 "zone_append": false, 00:17:59.000 "compare": false, 00:17:59.000 "compare_and_write": false, 00:17:59.000 "abort": true, 00:17:59.000 "seek_hole": false, 00:17:59.000 "seek_data": false, 00:17:59.001 "copy": true, 00:17:59.001 "nvme_iov_md": false 00:17:59.001 }, 00:17:59.001 "memory_domains": [ 00:17:59.001 { 00:17:59.001 "dma_device_id": "system", 00:17:59.001 "dma_device_type": 1 00:17:59.001 }, 00:17:59.001 { 00:17:59.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.001 "dma_device_type": 2 00:17:59.001 } 00:17:59.001 ], 00:17:59.001 "driver_specific": {} 00:17:59.001 } 00:17:59.001 ] 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:59.001 
14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.001 "name": "Existed_Raid", 00:17:59.001 "uuid": "24ba5226-03fe-4d8e-938c-2fede136d971", 00:17:59.001 "strip_size_kb": 64, 00:17:59.001 "state": "online", 00:17:59.001 "raid_level": "concat", 00:17:59.001 "superblock": false, 00:17:59.001 "num_base_bdevs": 4, 00:17:59.001 "num_base_bdevs_discovered": 4, 00:17:59.001 "num_base_bdevs_operational": 4, 00:17:59.001 "base_bdevs_list": [ 00:17:59.001 { 00:17:59.001 "name": "BaseBdev1", 00:17:59.001 "uuid": "551c6be1-b92f-450b-8ac0-a95ffbd0aa0f", 00:17:59.001 "is_configured": true, 00:17:59.001 "data_offset": 0, 00:17:59.001 "data_size": 65536 00:17:59.001 }, 00:17:59.001 { 00:17:59.001 "name": "BaseBdev2", 00:17:59.001 "uuid": "49fdadd1-b181-4e28-871c-3d09901074ad", 00:17:59.001 "is_configured": true, 00:17:59.001 "data_offset": 0, 00:17:59.001 "data_size": 65536 00:17:59.001 }, 00:17:59.001 { 00:17:59.001 "name": "BaseBdev3", 
00:17:59.001 "uuid": "af8116f1-3feb-4e8d-a293-2ac24ec13ef3", 00:17:59.001 "is_configured": true, 00:17:59.001 "data_offset": 0, 00:17:59.001 "data_size": 65536 00:17:59.001 }, 00:17:59.001 { 00:17:59.001 "name": "BaseBdev4", 00:17:59.001 "uuid": "4e9a8e63-f02c-45ec-a122-da13336eb639", 00:17:59.001 "is_configured": true, 00:17:59.001 "data_offset": 0, 00:17:59.001 "data_size": 65536 00:17:59.001 } 00:17:59.001 ] 00:17:59.001 }' 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.001 14:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.568 [2024-11-04 14:50:29.355865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.568 
14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.568 "name": "Existed_Raid", 00:17:59.568 "aliases": [ 00:17:59.568 "24ba5226-03fe-4d8e-938c-2fede136d971" 00:17:59.568 ], 00:17:59.568 "product_name": "Raid Volume", 00:17:59.568 "block_size": 512, 00:17:59.568 "num_blocks": 262144, 00:17:59.568 "uuid": "24ba5226-03fe-4d8e-938c-2fede136d971", 00:17:59.568 "assigned_rate_limits": { 00:17:59.568 "rw_ios_per_sec": 0, 00:17:59.568 "rw_mbytes_per_sec": 0, 00:17:59.568 "r_mbytes_per_sec": 0, 00:17:59.568 "w_mbytes_per_sec": 0 00:17:59.568 }, 00:17:59.568 "claimed": false, 00:17:59.568 "zoned": false, 00:17:59.568 "supported_io_types": { 00:17:59.568 "read": true, 00:17:59.568 "write": true, 00:17:59.568 "unmap": true, 00:17:59.568 "flush": true, 00:17:59.568 "reset": true, 00:17:59.568 "nvme_admin": false, 00:17:59.568 "nvme_io": false, 00:17:59.568 "nvme_io_md": false, 00:17:59.568 "write_zeroes": true, 00:17:59.568 "zcopy": false, 00:17:59.568 "get_zone_info": false, 00:17:59.568 "zone_management": false, 00:17:59.568 "zone_append": false, 00:17:59.568 "compare": false, 00:17:59.568 "compare_and_write": false, 00:17:59.568 "abort": false, 00:17:59.568 "seek_hole": false, 00:17:59.568 "seek_data": false, 00:17:59.568 "copy": false, 00:17:59.568 "nvme_iov_md": false 00:17:59.568 }, 00:17:59.568 "memory_domains": [ 00:17:59.568 { 00:17:59.568 "dma_device_id": "system", 00:17:59.568 "dma_device_type": 1 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.568 "dma_device_type": 2 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "dma_device_id": "system", 00:17:59.568 "dma_device_type": 1 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.568 "dma_device_type": 2 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "dma_device_id": "system", 00:17:59.568 "dma_device_type": 1 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:59.568 "dma_device_type": 2 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "dma_device_id": "system", 00:17:59.568 "dma_device_type": 1 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.568 "dma_device_type": 2 00:17:59.568 } 00:17:59.568 ], 00:17:59.568 "driver_specific": { 00:17:59.568 "raid": { 00:17:59.568 "uuid": "24ba5226-03fe-4d8e-938c-2fede136d971", 00:17:59.568 "strip_size_kb": 64, 00:17:59.568 "state": "online", 00:17:59.568 "raid_level": "concat", 00:17:59.568 "superblock": false, 00:17:59.568 "num_base_bdevs": 4, 00:17:59.568 "num_base_bdevs_discovered": 4, 00:17:59.568 "num_base_bdevs_operational": 4, 00:17:59.568 "base_bdevs_list": [ 00:17:59.568 { 00:17:59.568 "name": "BaseBdev1", 00:17:59.568 "uuid": "551c6be1-b92f-450b-8ac0-a95ffbd0aa0f", 00:17:59.568 "is_configured": true, 00:17:59.568 "data_offset": 0, 00:17:59.568 "data_size": 65536 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "name": "BaseBdev2", 00:17:59.568 "uuid": "49fdadd1-b181-4e28-871c-3d09901074ad", 00:17:59.568 "is_configured": true, 00:17:59.568 "data_offset": 0, 00:17:59.568 "data_size": 65536 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "name": "BaseBdev3", 00:17:59.568 "uuid": "af8116f1-3feb-4e8d-a293-2ac24ec13ef3", 00:17:59.568 "is_configured": true, 00:17:59.568 "data_offset": 0, 00:17:59.568 "data_size": 65536 00:17:59.568 }, 00:17:59.568 { 00:17:59.568 "name": "BaseBdev4", 00:17:59.568 "uuid": "4e9a8e63-f02c-45ec-a122-da13336eb639", 00:17:59.568 "is_configured": true, 00:17:59.568 "data_offset": 0, 00:17:59.568 "data_size": 65536 00:17:59.568 } 00:17:59.568 ] 00:17:59.568 } 00:17:59.568 } 00:17:59.568 }' 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:59.568 BaseBdev2 
00:17:59.568 BaseBdev3 00:17:59.568 BaseBdev4' 00:17:59.568 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.827 14:50:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.827 14:50:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.827 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.827 [2024-11-04 14:50:29.715571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.828 [2024-11-04 14:50:29.715624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.828 [2024-11-04 14:50:29.715698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.086 "name": "Existed_Raid", 00:18:00.086 "uuid": "24ba5226-03fe-4d8e-938c-2fede136d971", 00:18:00.086 "strip_size_kb": 64, 00:18:00.086 "state": "offline", 00:18:00.086 "raid_level": "concat", 00:18:00.086 "superblock": false, 00:18:00.086 "num_base_bdevs": 4, 00:18:00.086 "num_base_bdevs_discovered": 3, 00:18:00.086 "num_base_bdevs_operational": 3, 00:18:00.086 "base_bdevs_list": [ 00:18:00.086 { 00:18:00.086 "name": null, 00:18:00.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.086 "is_configured": false, 00:18:00.086 "data_offset": 0, 00:18:00.086 "data_size": 65536 00:18:00.086 }, 00:18:00.086 { 00:18:00.086 "name": "BaseBdev2", 00:18:00.086 "uuid": "49fdadd1-b181-4e28-871c-3d09901074ad", 00:18:00.086 "is_configured": 
true, 00:18:00.086 "data_offset": 0, 00:18:00.086 "data_size": 65536 00:18:00.086 }, 00:18:00.086 { 00:18:00.086 "name": "BaseBdev3", 00:18:00.086 "uuid": "af8116f1-3feb-4e8d-a293-2ac24ec13ef3", 00:18:00.086 "is_configured": true, 00:18:00.086 "data_offset": 0, 00:18:00.086 "data_size": 65536 00:18:00.086 }, 00:18:00.086 { 00:18:00.086 "name": "BaseBdev4", 00:18:00.086 "uuid": "4e9a8e63-f02c-45ec-a122-da13336eb639", 00:18:00.086 "is_configured": true, 00:18:00.086 "data_offset": 0, 00:18:00.086 "data_size": 65536 00:18:00.086 } 00:18:00.086 ] 00:18:00.086 }' 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.086 14:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.653 [2024-11-04 14:50:30.411962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.653 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.912 [2024-11-04 14:50:30.572868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:00.912 14:50:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.912 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.912 [2024-11-04 14:50:30.740165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:00.912 [2024-11-04 14:50:30.740239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.171 BaseBdev2 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.171 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.172 [ 00:18:01.172 { 00:18:01.172 "name": "BaseBdev2", 00:18:01.172 "aliases": [ 00:18:01.172 "526aefd1-f8ca-4874-a098-a1de1107f751" 00:18:01.172 ], 00:18:01.172 "product_name": "Malloc disk", 00:18:01.172 "block_size": 512, 00:18:01.172 "num_blocks": 65536, 00:18:01.172 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:01.172 "assigned_rate_limits": { 00:18:01.172 "rw_ios_per_sec": 0, 00:18:01.172 "rw_mbytes_per_sec": 0, 00:18:01.172 "r_mbytes_per_sec": 0, 00:18:01.172 "w_mbytes_per_sec": 0 00:18:01.172 }, 00:18:01.172 "claimed": false, 00:18:01.172 "zoned": false, 00:18:01.172 "supported_io_types": { 00:18:01.172 "read": true, 00:18:01.172 "write": true, 00:18:01.172 "unmap": true, 00:18:01.172 "flush": true, 00:18:01.172 "reset": true, 00:18:01.172 "nvme_admin": false, 00:18:01.172 "nvme_io": false, 00:18:01.172 "nvme_io_md": false, 00:18:01.172 "write_zeroes": true, 00:18:01.172 "zcopy": true, 00:18:01.172 "get_zone_info": false, 00:18:01.172 "zone_management": false, 00:18:01.172 "zone_append": false, 00:18:01.172 "compare": false, 00:18:01.172 "compare_and_write": false, 00:18:01.172 "abort": true, 00:18:01.172 "seek_hole": false, 00:18:01.172 
"seek_data": false, 00:18:01.172 "copy": true, 00:18:01.172 "nvme_iov_md": false 00:18:01.172 }, 00:18:01.172 "memory_domains": [ 00:18:01.172 { 00:18:01.172 "dma_device_id": "system", 00:18:01.172 "dma_device_type": 1 00:18:01.172 }, 00:18:01.172 { 00:18:01.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.172 "dma_device_type": 2 00:18:01.172 } 00:18:01.172 ], 00:18:01.172 "driver_specific": {} 00:18:01.172 } 00:18:01.172 ] 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.172 14:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.172 BaseBdev3 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.172 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.172 [ 00:18:01.172 { 00:18:01.172 "name": "BaseBdev3", 00:18:01.172 "aliases": [ 00:18:01.172 "31c4541e-5857-4fe3-9f95-4ae7ff1758de" 00:18:01.172 ], 00:18:01.172 "product_name": "Malloc disk", 00:18:01.172 "block_size": 512, 00:18:01.172 "num_blocks": 65536, 00:18:01.172 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:01.172 "assigned_rate_limits": { 00:18:01.172 "rw_ios_per_sec": 0, 00:18:01.172 "rw_mbytes_per_sec": 0, 00:18:01.172 "r_mbytes_per_sec": 0, 00:18:01.172 "w_mbytes_per_sec": 0 00:18:01.172 }, 00:18:01.172 "claimed": false, 00:18:01.172 "zoned": false, 00:18:01.172 "supported_io_types": { 00:18:01.172 "read": true, 00:18:01.172 "write": true, 00:18:01.172 "unmap": true, 00:18:01.172 "flush": true, 00:18:01.172 "reset": true, 00:18:01.172 "nvme_admin": false, 00:18:01.172 "nvme_io": false, 00:18:01.172 "nvme_io_md": false, 00:18:01.172 "write_zeroes": true, 00:18:01.443 "zcopy": true, 00:18:01.443 "get_zone_info": false, 00:18:01.443 "zone_management": false, 00:18:01.443 "zone_append": false, 00:18:01.443 "compare": false, 00:18:01.443 "compare_and_write": false, 00:18:01.443 "abort": true, 00:18:01.443 "seek_hole": false, 00:18:01.443 "seek_data": false, 
00:18:01.443 "copy": true, 00:18:01.443 "nvme_iov_md": false 00:18:01.443 }, 00:18:01.443 "memory_domains": [ 00:18:01.443 { 00:18:01.443 "dma_device_id": "system", 00:18:01.443 "dma_device_type": 1 00:18:01.443 }, 00:18:01.443 { 00:18:01.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.443 "dma_device_type": 2 00:18:01.443 } 00:18:01.443 ], 00:18:01.443 "driver_specific": {} 00:18:01.443 } 00:18:01.443 ] 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.443 BaseBdev4 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:01.443 
14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.443 [ 00:18:01.443 { 00:18:01.443 "name": "BaseBdev4", 00:18:01.443 "aliases": [ 00:18:01.443 "434be0bc-ae71-4032-b502-2fccb9e43115" 00:18:01.443 ], 00:18:01.443 "product_name": "Malloc disk", 00:18:01.443 "block_size": 512, 00:18:01.443 "num_blocks": 65536, 00:18:01.443 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:01.443 "assigned_rate_limits": { 00:18:01.443 "rw_ios_per_sec": 0, 00:18:01.443 "rw_mbytes_per_sec": 0, 00:18:01.443 "r_mbytes_per_sec": 0, 00:18:01.443 "w_mbytes_per_sec": 0 00:18:01.443 }, 00:18:01.443 "claimed": false, 00:18:01.443 "zoned": false, 00:18:01.443 "supported_io_types": { 00:18:01.443 "read": true, 00:18:01.443 "write": true, 00:18:01.443 "unmap": true, 00:18:01.443 "flush": true, 00:18:01.443 "reset": true, 00:18:01.443 "nvme_admin": false, 00:18:01.443 "nvme_io": false, 00:18:01.443 "nvme_io_md": false, 00:18:01.443 "write_zeroes": true, 00:18:01.443 "zcopy": true, 00:18:01.443 "get_zone_info": false, 00:18:01.443 "zone_management": false, 00:18:01.443 "zone_append": false, 00:18:01.443 "compare": false, 00:18:01.443 "compare_and_write": false, 00:18:01.443 "abort": true, 00:18:01.443 "seek_hole": false, 00:18:01.443 "seek_data": false, 00:18:01.443 
"copy": true, 00:18:01.443 "nvme_iov_md": false 00:18:01.443 }, 00:18:01.443 "memory_domains": [ 00:18:01.443 { 00:18:01.443 "dma_device_id": "system", 00:18:01.443 "dma_device_type": 1 00:18:01.443 }, 00:18:01.443 { 00:18:01.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.443 "dma_device_type": 2 00:18:01.443 } 00:18:01.443 ], 00:18:01.443 "driver_specific": {} 00:18:01.443 } 00:18:01.443 ] 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.443 [2024-11-04 14:50:31.150497] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.443 [2024-11-04 14:50:31.150698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.443 [2024-11-04 14:50:31.150880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.443 [2024-11-04 14:50:31.153601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.443 [2024-11-04 14:50:31.153814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.443 14:50:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.443 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.443 "name": "Existed_Raid", 00:18:01.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.443 "strip_size_kb": 64, 00:18:01.443 "state": "configuring", 00:18:01.443 
"raid_level": "concat", 00:18:01.443 "superblock": false, 00:18:01.443 "num_base_bdevs": 4, 00:18:01.443 "num_base_bdevs_discovered": 3, 00:18:01.443 "num_base_bdevs_operational": 4, 00:18:01.443 "base_bdevs_list": [ 00:18:01.443 { 00:18:01.443 "name": "BaseBdev1", 00:18:01.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.444 "is_configured": false, 00:18:01.444 "data_offset": 0, 00:18:01.444 "data_size": 0 00:18:01.444 }, 00:18:01.444 { 00:18:01.444 "name": "BaseBdev2", 00:18:01.444 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:01.444 "is_configured": true, 00:18:01.444 "data_offset": 0, 00:18:01.444 "data_size": 65536 00:18:01.444 }, 00:18:01.444 { 00:18:01.444 "name": "BaseBdev3", 00:18:01.444 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:01.444 "is_configured": true, 00:18:01.444 "data_offset": 0, 00:18:01.444 "data_size": 65536 00:18:01.444 }, 00:18:01.444 { 00:18:01.444 "name": "BaseBdev4", 00:18:01.444 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:01.444 "is_configured": true, 00:18:01.444 "data_offset": 0, 00:18:01.444 "data_size": 65536 00:18:01.444 } 00:18:01.444 ] 00:18:01.444 }' 00:18:01.444 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.444 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.011 [2024-11-04 14:50:31.682774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.011 "name": "Existed_Raid", 00:18:02.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.011 "strip_size_kb": 64, 00:18:02.011 "state": "configuring", 00:18:02.011 "raid_level": "concat", 00:18:02.011 "superblock": false, 
00:18:02.011 "num_base_bdevs": 4, 00:18:02.011 "num_base_bdevs_discovered": 2, 00:18:02.011 "num_base_bdevs_operational": 4, 00:18:02.011 "base_bdevs_list": [ 00:18:02.011 { 00:18:02.011 "name": "BaseBdev1", 00:18:02.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.011 "is_configured": false, 00:18:02.011 "data_offset": 0, 00:18:02.011 "data_size": 0 00:18:02.011 }, 00:18:02.011 { 00:18:02.011 "name": null, 00:18:02.011 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:02.011 "is_configured": false, 00:18:02.011 "data_offset": 0, 00:18:02.011 "data_size": 65536 00:18:02.011 }, 00:18:02.011 { 00:18:02.011 "name": "BaseBdev3", 00:18:02.011 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:02.011 "is_configured": true, 00:18:02.011 "data_offset": 0, 00:18:02.011 "data_size": 65536 00:18:02.011 }, 00:18:02.011 { 00:18:02.011 "name": "BaseBdev4", 00:18:02.011 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:02.011 "is_configured": true, 00:18:02.011 "data_offset": 0, 00:18:02.011 "data_size": 65536 00:18:02.011 } 00:18:02.011 ] 00:18:02.011 }' 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.011 14:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:02.578 14:50:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.578 [2024-11-04 14:50:32.292161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.578 BaseBdev1 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.578 [ 00:18:02.578 { 00:18:02.578 "name": "BaseBdev1", 00:18:02.578 "aliases": [ 00:18:02.578 "753a4995-1249-4044-bdf5-83ad1e76ea36" 00:18:02.578 ], 00:18:02.578 "product_name": "Malloc disk", 00:18:02.578 "block_size": 512, 00:18:02.578 "num_blocks": 65536, 00:18:02.578 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:02.578 "assigned_rate_limits": { 00:18:02.578 "rw_ios_per_sec": 0, 00:18:02.578 "rw_mbytes_per_sec": 0, 00:18:02.578 "r_mbytes_per_sec": 0, 00:18:02.578 "w_mbytes_per_sec": 0 00:18:02.578 }, 00:18:02.578 "claimed": true, 00:18:02.578 "claim_type": "exclusive_write", 00:18:02.578 "zoned": false, 00:18:02.578 "supported_io_types": { 00:18:02.578 "read": true, 00:18:02.578 "write": true, 00:18:02.578 "unmap": true, 00:18:02.578 "flush": true, 00:18:02.578 "reset": true, 00:18:02.578 "nvme_admin": false, 00:18:02.578 "nvme_io": false, 00:18:02.578 "nvme_io_md": false, 00:18:02.578 "write_zeroes": true, 00:18:02.578 "zcopy": true, 00:18:02.578 "get_zone_info": false, 00:18:02.578 "zone_management": false, 00:18:02.578 "zone_append": false, 00:18:02.578 "compare": false, 00:18:02.578 "compare_and_write": false, 00:18:02.578 "abort": true, 00:18:02.578 "seek_hole": false, 00:18:02.578 "seek_data": false, 00:18:02.578 "copy": true, 00:18:02.578 "nvme_iov_md": false 00:18:02.578 }, 00:18:02.578 "memory_domains": [ 00:18:02.578 { 00:18:02.578 "dma_device_id": "system", 00:18:02.578 "dma_device_type": 1 00:18:02.578 }, 00:18:02.578 { 00:18:02.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.578 "dma_device_type": 2 00:18:02.578 } 00:18:02.578 ], 00:18:02.578 "driver_specific": {} 00:18:02.578 } 00:18:02.578 ] 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.578 "name": "Existed_Raid", 00:18:02.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.578 "strip_size_kb": 64, 00:18:02.578 "state": "configuring", 00:18:02.578 "raid_level": "concat", 00:18:02.578 "superblock": false, 
00:18:02.578 "num_base_bdevs": 4, 00:18:02.578 "num_base_bdevs_discovered": 3, 00:18:02.578 "num_base_bdevs_operational": 4, 00:18:02.578 "base_bdevs_list": [ 00:18:02.578 { 00:18:02.578 "name": "BaseBdev1", 00:18:02.578 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:02.578 "is_configured": true, 00:18:02.578 "data_offset": 0, 00:18:02.578 "data_size": 65536 00:18:02.578 }, 00:18:02.578 { 00:18:02.578 "name": null, 00:18:02.578 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:02.578 "is_configured": false, 00:18:02.578 "data_offset": 0, 00:18:02.578 "data_size": 65536 00:18:02.578 }, 00:18:02.578 { 00:18:02.578 "name": "BaseBdev3", 00:18:02.578 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:02.578 "is_configured": true, 00:18:02.578 "data_offset": 0, 00:18:02.578 "data_size": 65536 00:18:02.578 }, 00:18:02.578 { 00:18:02.578 "name": "BaseBdev4", 00:18:02.578 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:02.578 "is_configured": true, 00:18:02.578 "data_offset": 0, 00:18:02.578 "data_size": 65536 00:18:02.578 } 00:18:02.578 ] 00:18:02.578 }' 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.578 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:03.145 14:50:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.145 [2024-11-04 14:50:32.916444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.145 14:50:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.145 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.145 "name": "Existed_Raid", 00:18:03.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.145 "strip_size_kb": 64, 00:18:03.145 "state": "configuring", 00:18:03.145 "raid_level": "concat", 00:18:03.145 "superblock": false, 00:18:03.145 "num_base_bdevs": 4, 00:18:03.145 "num_base_bdevs_discovered": 2, 00:18:03.145 "num_base_bdevs_operational": 4, 00:18:03.145 "base_bdevs_list": [ 00:18:03.145 { 00:18:03.145 "name": "BaseBdev1", 00:18:03.146 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:03.146 "is_configured": true, 00:18:03.146 "data_offset": 0, 00:18:03.146 "data_size": 65536 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "name": null, 00:18:03.146 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:03.146 "is_configured": false, 00:18:03.146 "data_offset": 0, 00:18:03.146 "data_size": 65536 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "name": null, 00:18:03.146 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:03.146 "is_configured": false, 00:18:03.146 "data_offset": 0, 00:18:03.146 "data_size": 65536 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "name": "BaseBdev4", 00:18:03.146 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:03.146 "is_configured": true, 00:18:03.146 "data_offset": 0, 00:18:03.146 "data_size": 65536 00:18:03.146 } 00:18:03.146 ] 00:18:03.146 }' 00:18:03.146 14:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.146 14:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.712 [2024-11-04 14:50:33.512811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.712 "name": "Existed_Raid", 00:18:03.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.712 "strip_size_kb": 64, 00:18:03.712 "state": "configuring", 00:18:03.712 "raid_level": "concat", 00:18:03.712 "superblock": false, 00:18:03.712 "num_base_bdevs": 4, 00:18:03.712 "num_base_bdevs_discovered": 3, 00:18:03.712 "num_base_bdevs_operational": 4, 00:18:03.712 "base_bdevs_list": [ 00:18:03.712 { 00:18:03.712 "name": "BaseBdev1", 00:18:03.712 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:03.712 "is_configured": true, 00:18:03.712 "data_offset": 0, 00:18:03.712 "data_size": 65536 00:18:03.712 }, 00:18:03.712 { 00:18:03.712 "name": null, 00:18:03.712 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:03.712 "is_configured": false, 00:18:03.712 "data_offset": 0, 00:18:03.712 "data_size": 65536 00:18:03.712 }, 00:18:03.712 { 00:18:03.712 "name": "BaseBdev3", 00:18:03.712 "uuid": 
"31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:03.712 "is_configured": true, 00:18:03.712 "data_offset": 0, 00:18:03.712 "data_size": 65536 00:18:03.712 }, 00:18:03.712 { 00:18:03.712 "name": "BaseBdev4", 00:18:03.712 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:03.712 "is_configured": true, 00:18:03.712 "data_offset": 0, 00:18:03.712 "data_size": 65536 00:18:03.712 } 00:18:03.712 ] 00:18:03.712 }' 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.712 14:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.279 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.279 [2024-11-04 14:50:34.128994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.537 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.538 "name": "Existed_Raid", 00:18:04.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.538 "strip_size_kb": 64, 00:18:04.538 "state": "configuring", 00:18:04.538 "raid_level": "concat", 00:18:04.538 "superblock": false, 00:18:04.538 "num_base_bdevs": 4, 00:18:04.538 
"num_base_bdevs_discovered": 2, 00:18:04.538 "num_base_bdevs_operational": 4, 00:18:04.538 "base_bdevs_list": [ 00:18:04.538 { 00:18:04.538 "name": null, 00:18:04.538 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:04.538 "is_configured": false, 00:18:04.538 "data_offset": 0, 00:18:04.538 "data_size": 65536 00:18:04.538 }, 00:18:04.538 { 00:18:04.538 "name": null, 00:18:04.538 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:04.538 "is_configured": false, 00:18:04.538 "data_offset": 0, 00:18:04.538 "data_size": 65536 00:18:04.538 }, 00:18:04.538 { 00:18:04.538 "name": "BaseBdev3", 00:18:04.538 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:04.538 "is_configured": true, 00:18:04.538 "data_offset": 0, 00:18:04.538 "data_size": 65536 00:18:04.538 }, 00:18:04.538 { 00:18:04.538 "name": "BaseBdev4", 00:18:04.538 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:04.538 "is_configured": true, 00:18:04.538 "data_offset": 0, 00:18:04.538 "data_size": 65536 00:18:04.538 } 00:18:04.538 ] 00:18:04.538 }' 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.538 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.110 [2024-11-04 14:50:34.805635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.110 "name": "Existed_Raid", 00:18:05.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.110 "strip_size_kb": 64, 00:18:05.110 "state": "configuring", 00:18:05.110 "raid_level": "concat", 00:18:05.110 "superblock": false, 00:18:05.110 "num_base_bdevs": 4, 00:18:05.110 "num_base_bdevs_discovered": 3, 00:18:05.110 "num_base_bdevs_operational": 4, 00:18:05.110 "base_bdevs_list": [ 00:18:05.110 { 00:18:05.110 "name": null, 00:18:05.110 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:05.110 "is_configured": false, 00:18:05.110 "data_offset": 0, 00:18:05.110 "data_size": 65536 00:18:05.110 }, 00:18:05.110 { 00:18:05.110 "name": "BaseBdev2", 00:18:05.110 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:05.110 "is_configured": true, 00:18:05.110 "data_offset": 0, 00:18:05.110 "data_size": 65536 00:18:05.110 }, 00:18:05.110 { 00:18:05.110 "name": "BaseBdev3", 00:18:05.110 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:05.110 "is_configured": true, 00:18:05.110 "data_offset": 0, 00:18:05.110 "data_size": 65536 00:18:05.110 }, 00:18:05.110 { 00:18:05.110 "name": "BaseBdev4", 00:18:05.110 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:05.110 "is_configured": true, 00:18:05.110 "data_offset": 0, 00:18:05.110 "data_size": 65536 00:18:05.110 } 00:18:05.110 ] 00:18:05.110 }' 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.110 14:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 753a4995-1249-4044-bdf5-83ad1e76ea36 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.678 [2024-11-04 14:50:35.471782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:05.678 [2024-11-04 14:50:35.471848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:05.678 [2024-11-04 14:50:35.471859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:05.678 [2024-11-04 14:50:35.472183] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:05.678 [2024-11-04 14:50:35.472466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:05.678 [2024-11-04 14:50:35.472488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:05.678 [2024-11-04 14:50:35.472851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.678 NewBaseBdev 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.678 14:50:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.678 [ 00:18:05.678 { 00:18:05.678 "name": "NewBaseBdev", 00:18:05.678 "aliases": [ 00:18:05.678 "753a4995-1249-4044-bdf5-83ad1e76ea36" 00:18:05.678 ], 00:18:05.678 "product_name": "Malloc disk", 00:18:05.678 "block_size": 512, 00:18:05.678 "num_blocks": 65536, 00:18:05.678 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:05.678 "assigned_rate_limits": { 00:18:05.678 "rw_ios_per_sec": 0, 00:18:05.678 "rw_mbytes_per_sec": 0, 00:18:05.678 "r_mbytes_per_sec": 0, 00:18:05.678 "w_mbytes_per_sec": 0 00:18:05.678 }, 00:18:05.678 "claimed": true, 00:18:05.678 "claim_type": "exclusive_write", 00:18:05.678 "zoned": false, 00:18:05.678 "supported_io_types": { 00:18:05.678 "read": true, 00:18:05.678 "write": true, 00:18:05.678 "unmap": true, 00:18:05.678 "flush": true, 00:18:05.678 "reset": true, 00:18:05.678 "nvme_admin": false, 00:18:05.678 "nvme_io": false, 00:18:05.678 "nvme_io_md": false, 00:18:05.678 "write_zeroes": true, 00:18:05.678 "zcopy": true, 00:18:05.678 "get_zone_info": false, 00:18:05.678 "zone_management": false, 00:18:05.678 "zone_append": false, 00:18:05.678 "compare": false, 00:18:05.678 "compare_and_write": false, 00:18:05.678 "abort": true, 00:18:05.678 "seek_hole": false, 00:18:05.678 "seek_data": false, 00:18:05.678 "copy": true, 00:18:05.678 "nvme_iov_md": false 00:18:05.678 }, 00:18:05.678 "memory_domains": [ 00:18:05.678 { 00:18:05.678 "dma_device_id": "system", 00:18:05.678 "dma_device_type": 1 00:18:05.678 }, 00:18:05.678 { 00:18:05.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.678 "dma_device_type": 2 00:18:05.678 } 00:18:05.678 ], 00:18:05.678 "driver_specific": {} 00:18:05.678 } 00:18:05.678 ] 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.678 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:05.678 14:50:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.679 "name": "Existed_Raid", 00:18:05.679 "uuid": "e3d7e7c6-5d51-49dc-ab0b-8edf71c3a656", 00:18:05.679 "strip_size_kb": 64, 00:18:05.679 "state": "online", 00:18:05.679 "raid_level": 
"concat", 00:18:05.679 "superblock": false, 00:18:05.679 "num_base_bdevs": 4, 00:18:05.679 "num_base_bdevs_discovered": 4, 00:18:05.679 "num_base_bdevs_operational": 4, 00:18:05.679 "base_bdevs_list": [ 00:18:05.679 { 00:18:05.679 "name": "NewBaseBdev", 00:18:05.679 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:05.679 "is_configured": true, 00:18:05.679 "data_offset": 0, 00:18:05.679 "data_size": 65536 00:18:05.679 }, 00:18:05.679 { 00:18:05.679 "name": "BaseBdev2", 00:18:05.679 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:05.679 "is_configured": true, 00:18:05.679 "data_offset": 0, 00:18:05.679 "data_size": 65536 00:18:05.679 }, 00:18:05.679 { 00:18:05.679 "name": "BaseBdev3", 00:18:05.679 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:05.679 "is_configured": true, 00:18:05.679 "data_offset": 0, 00:18:05.679 "data_size": 65536 00:18:05.679 }, 00:18:05.679 { 00:18:05.679 "name": "BaseBdev4", 00:18:05.679 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:05.679 "is_configured": true, 00:18:05.679 "data_offset": 0, 00:18:05.679 "data_size": 65536 00:18:05.679 } 00:18:05.679 ] 00:18:05.679 }' 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.679 14:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.246 [2024-11-04 14:50:36.056534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.246 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.246 "name": "Existed_Raid", 00:18:06.246 "aliases": [ 00:18:06.246 "e3d7e7c6-5d51-49dc-ab0b-8edf71c3a656" 00:18:06.246 ], 00:18:06.246 "product_name": "Raid Volume", 00:18:06.246 "block_size": 512, 00:18:06.246 "num_blocks": 262144, 00:18:06.246 "uuid": "e3d7e7c6-5d51-49dc-ab0b-8edf71c3a656", 00:18:06.246 "assigned_rate_limits": { 00:18:06.246 "rw_ios_per_sec": 0, 00:18:06.246 "rw_mbytes_per_sec": 0, 00:18:06.246 "r_mbytes_per_sec": 0, 00:18:06.246 "w_mbytes_per_sec": 0 00:18:06.246 }, 00:18:06.246 "claimed": false, 00:18:06.246 "zoned": false, 00:18:06.246 "supported_io_types": { 00:18:06.246 "read": true, 00:18:06.246 "write": true, 00:18:06.246 "unmap": true, 00:18:06.246 "flush": true, 00:18:06.246 "reset": true, 00:18:06.246 "nvme_admin": false, 00:18:06.246 "nvme_io": false, 00:18:06.246 "nvme_io_md": false, 00:18:06.246 "write_zeroes": true, 00:18:06.246 "zcopy": false, 00:18:06.246 "get_zone_info": false, 00:18:06.246 "zone_management": false, 00:18:06.246 "zone_append": false, 00:18:06.246 "compare": false, 00:18:06.246 "compare_and_write": false, 00:18:06.246 "abort": false, 00:18:06.246 "seek_hole": false, 00:18:06.246 "seek_data": false, 00:18:06.246 "copy": false, 
00:18:06.246 "nvme_iov_md": false 00:18:06.246 }, 00:18:06.246 "memory_domains": [ 00:18:06.246 { 00:18:06.246 "dma_device_id": "system", 00:18:06.246 "dma_device_type": 1 00:18:06.246 }, 00:18:06.246 { 00:18:06.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.246 "dma_device_type": 2 00:18:06.246 }, 00:18:06.246 { 00:18:06.246 "dma_device_id": "system", 00:18:06.246 "dma_device_type": 1 00:18:06.246 }, 00:18:06.246 { 00:18:06.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.246 "dma_device_type": 2 00:18:06.246 }, 00:18:06.246 { 00:18:06.246 "dma_device_id": "system", 00:18:06.246 "dma_device_type": 1 00:18:06.246 }, 00:18:06.246 { 00:18:06.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.246 "dma_device_type": 2 00:18:06.246 }, 00:18:06.246 { 00:18:06.246 "dma_device_id": "system", 00:18:06.246 "dma_device_type": 1 00:18:06.246 }, 00:18:06.246 { 00:18:06.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.246 "dma_device_type": 2 00:18:06.246 } 00:18:06.246 ], 00:18:06.246 "driver_specific": { 00:18:06.246 "raid": { 00:18:06.246 "uuid": "e3d7e7c6-5d51-49dc-ab0b-8edf71c3a656", 00:18:06.246 "strip_size_kb": 64, 00:18:06.246 "state": "online", 00:18:06.246 "raid_level": "concat", 00:18:06.246 "superblock": false, 00:18:06.246 "num_base_bdevs": 4, 00:18:06.246 "num_base_bdevs_discovered": 4, 00:18:06.246 "num_base_bdevs_operational": 4, 00:18:06.246 "base_bdevs_list": [ 00:18:06.246 { 00:18:06.246 "name": "NewBaseBdev", 00:18:06.246 "uuid": "753a4995-1249-4044-bdf5-83ad1e76ea36", 00:18:06.246 "is_configured": true, 00:18:06.246 "data_offset": 0, 00:18:06.246 "data_size": 65536 00:18:06.246 }, 00:18:06.246 { 00:18:06.247 "name": "BaseBdev2", 00:18:06.247 "uuid": "526aefd1-f8ca-4874-a098-a1de1107f751", 00:18:06.247 "is_configured": true, 00:18:06.247 "data_offset": 0, 00:18:06.247 "data_size": 65536 00:18:06.247 }, 00:18:06.247 { 00:18:06.247 "name": "BaseBdev3", 00:18:06.247 "uuid": "31c4541e-5857-4fe3-9f95-4ae7ff1758de", 00:18:06.247 
"is_configured": true, 00:18:06.247 "data_offset": 0, 00:18:06.247 "data_size": 65536 00:18:06.247 }, 00:18:06.247 { 00:18:06.247 "name": "BaseBdev4", 00:18:06.247 "uuid": "434be0bc-ae71-4032-b502-2fccb9e43115", 00:18:06.247 "is_configured": true, 00:18:06.247 "data_offset": 0, 00:18:06.247 "data_size": 65536 00:18:06.247 } 00:18:06.247 ] 00:18:06.247 } 00:18:06.247 } 00:18:06.247 }' 00:18:06.247 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:06.506 BaseBdev2 00:18:06.506 BaseBdev3 00:18:06.506 BaseBdev4' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.506 14:50:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.506 14:50:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.506 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.764 [2024-11-04 14:50:36.420165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.764 [2024-11-04 14:50:36.420381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.764 [2024-11-04 14:50:36.420524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.764 [2024-11-04 14:50:36.420672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.764 [2024-11-04 14:50:36.420690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71534 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71534 ']' 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71534 00:18:06.764 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:06.765 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:06.765 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71534 00:18:06.765 killing process with pid 71534 00:18:06.765 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:06.765 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:06.765 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71534' 00:18:06.765 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71534 00:18:06.765 [2024-11-04 14:50:36.458133] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:06.765 14:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71534 00:18:07.023 [2024-11-04 14:50:36.803204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.401 ************************************ 00:18:08.401 END TEST raid_state_function_test 00:18:08.401 ************************************ 00:18:08.401 14:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:08.401 00:18:08.401 real 0m13.288s 00:18:08.401 user 0m21.798s 00:18:08.401 sys 0m2.002s 00:18:08.401 14:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:08.401 14:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:08.402 14:50:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:08.402 14:50:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:08.402 14:50:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:08.402 14:50:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.402 ************************************ 00:18:08.402 START TEST raid_state_function_test_sb 00:18:08.402 ************************************ 00:18:08.402 14:50:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:18:08.402 14:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:18:08.402 14:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:08.402 14:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:08.402 14:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:08.402 14:50:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.402 
14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.402 Process raid pid: 72220 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:08.402 14:50:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72220 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72220' 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72220 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72220 ']' 00:18:08.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:08.402 14:50:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.402 [2024-11-04 14:50:38.120953] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:18:08.402 [2024-11-04 14:50:38.121332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.662 [2024-11-04 14:50:38.307840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.663 [2024-11-04 14:50:38.468567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.921 [2024-11-04 14:50:38.724740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.921 [2024-11-04 14:50:38.724821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.487 [2024-11-04 14:50:39.115441] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.487 [2024-11-04 14:50:39.115579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.487 [2024-11-04 14:50:39.115625] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.487 [2024-11-04 14:50:39.115675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.487 [2024-11-04 14:50:39.115694] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:18:09.487 [2024-11-04 14:50:39.115720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:09.487 [2024-11-04 14:50:39.115738] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:09.487 [2024-11-04 14:50:39.115763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.487 
14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.487 "name": "Existed_Raid", 00:18:09.487 "uuid": "84a0084a-5db2-48d8-90dc-abcde0dd8856", 00:18:09.487 "strip_size_kb": 64, 00:18:09.487 "state": "configuring", 00:18:09.487 "raid_level": "concat", 00:18:09.487 "superblock": true, 00:18:09.487 "num_base_bdevs": 4, 00:18:09.487 "num_base_bdevs_discovered": 0, 00:18:09.487 "num_base_bdevs_operational": 4, 00:18:09.487 "base_bdevs_list": [ 00:18:09.487 { 00:18:09.487 "name": "BaseBdev1", 00:18:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.487 "is_configured": false, 00:18:09.487 "data_offset": 0, 00:18:09.487 "data_size": 0 00:18:09.487 }, 00:18:09.487 { 00:18:09.487 "name": "BaseBdev2", 00:18:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.487 "is_configured": false, 00:18:09.487 "data_offset": 0, 00:18:09.487 "data_size": 0 00:18:09.487 }, 00:18:09.487 { 00:18:09.487 "name": "BaseBdev3", 00:18:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.487 "is_configured": false, 00:18:09.487 "data_offset": 0, 00:18:09.487 "data_size": 0 00:18:09.487 }, 00:18:09.487 { 00:18:09.487 "name": "BaseBdev4", 00:18:09.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.487 "is_configured": false, 00:18:09.487 "data_offset": 0, 00:18:09.487 "data_size": 0 00:18:09.487 } 00:18:09.487 ] 00:18:09.487 }' 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.487 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.746 14:50:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.746 [2024-11-04 14:50:39.599415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.746 [2024-11-04 14:50:39.599665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.746 [2024-11-04 14:50:39.607419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.746 [2024-11-04 14:50:39.607501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.746 [2024-11-04 14:50:39.607516] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.746 [2024-11-04 14:50:39.607533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.746 [2024-11-04 14:50:39.607541] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:09.746 [2024-11-04 14:50:39.607556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:09.746 [2024-11-04 14:50:39.607579] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:18:09.746 [2024-11-04 14:50:39.607608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.746 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.003 [2024-11-04 14:50:39.655952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.003 BaseBdev1 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.003 14:50:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.004 [ 00:18:10.004 { 00:18:10.004 "name": "BaseBdev1", 00:18:10.004 "aliases": [ 00:18:10.004 "8ad3d6f0-21c8-4bac-a844-b086bf7419d7" 00:18:10.004 ], 00:18:10.004 "product_name": "Malloc disk", 00:18:10.004 "block_size": 512, 00:18:10.004 "num_blocks": 65536, 00:18:10.004 "uuid": "8ad3d6f0-21c8-4bac-a844-b086bf7419d7", 00:18:10.004 "assigned_rate_limits": { 00:18:10.004 "rw_ios_per_sec": 0, 00:18:10.004 "rw_mbytes_per_sec": 0, 00:18:10.004 "r_mbytes_per_sec": 0, 00:18:10.004 "w_mbytes_per_sec": 0 00:18:10.004 }, 00:18:10.004 "claimed": true, 00:18:10.004 "claim_type": "exclusive_write", 00:18:10.004 "zoned": false, 00:18:10.004 "supported_io_types": { 00:18:10.004 "read": true, 00:18:10.004 "write": true, 00:18:10.004 "unmap": true, 00:18:10.004 "flush": true, 00:18:10.004 "reset": true, 00:18:10.004 "nvme_admin": false, 00:18:10.004 "nvme_io": false, 00:18:10.004 "nvme_io_md": false, 00:18:10.004 "write_zeroes": true, 00:18:10.004 "zcopy": true, 00:18:10.004 "get_zone_info": false, 00:18:10.004 "zone_management": false, 00:18:10.004 "zone_append": false, 00:18:10.004 "compare": false, 00:18:10.004 "compare_and_write": false, 00:18:10.004 "abort": true, 00:18:10.004 "seek_hole": false, 00:18:10.004 "seek_data": false, 00:18:10.004 "copy": true, 00:18:10.004 "nvme_iov_md": false 00:18:10.004 }, 00:18:10.004 "memory_domains": [ 00:18:10.004 { 00:18:10.004 "dma_device_id": "system", 00:18:10.004 "dma_device_type": 1 00:18:10.004 }, 00:18:10.004 { 00:18:10.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.004 "dma_device_type": 2 00:18:10.004 } 
00:18:10.004 ], 00:18:10.004 "driver_specific": {} 00:18:10.004 } 00:18:10.004 ] 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.004 14:50:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.004 "name": "Existed_Raid", 00:18:10.004 "uuid": "9efb0e59-7767-4676-98fd-345bbcf60667", 00:18:10.004 "strip_size_kb": 64, 00:18:10.004 "state": "configuring", 00:18:10.004 "raid_level": "concat", 00:18:10.004 "superblock": true, 00:18:10.004 "num_base_bdevs": 4, 00:18:10.004 "num_base_bdevs_discovered": 1, 00:18:10.004 "num_base_bdevs_operational": 4, 00:18:10.004 "base_bdevs_list": [ 00:18:10.004 { 00:18:10.004 "name": "BaseBdev1", 00:18:10.004 "uuid": "8ad3d6f0-21c8-4bac-a844-b086bf7419d7", 00:18:10.004 "is_configured": true, 00:18:10.004 "data_offset": 2048, 00:18:10.004 "data_size": 63488 00:18:10.004 }, 00:18:10.004 { 00:18:10.004 "name": "BaseBdev2", 00:18:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.004 "is_configured": false, 00:18:10.004 "data_offset": 0, 00:18:10.004 "data_size": 0 00:18:10.004 }, 00:18:10.004 { 00:18:10.004 "name": "BaseBdev3", 00:18:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.004 "is_configured": false, 00:18:10.004 "data_offset": 0, 00:18:10.004 "data_size": 0 00:18:10.004 }, 00:18:10.004 { 00:18:10.004 "name": "BaseBdev4", 00:18:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.004 "is_configured": false, 00:18:10.004 "data_offset": 0, 00:18:10.004 "data_size": 0 00:18:10.004 } 00:18:10.004 ] 00:18:10.004 }' 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.004 14:50:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.570 14:50:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.570 [2024-11-04 14:50:40.204247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:10.570 [2024-11-04 14:50:40.204328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.570 [2024-11-04 14:50:40.212350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.570 [2024-11-04 14:50:40.215402] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:10.570 [2024-11-04 14:50:40.215462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:10.570 [2024-11-04 14:50:40.215480] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:10.570 [2024-11-04 14:50:40.215517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:10.570 [2024-11-04 14:50:40.215528] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:10.570 [2024-11-04 14:50:40.215555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:10.570 "name": "Existed_Raid", 00:18:10.570 "uuid": "ace9e85c-4f33-4471-ae8d-24920c326bec", 00:18:10.570 "strip_size_kb": 64, 00:18:10.570 "state": "configuring", 00:18:10.570 "raid_level": "concat", 00:18:10.570 "superblock": true, 00:18:10.570 "num_base_bdevs": 4, 00:18:10.570 "num_base_bdevs_discovered": 1, 00:18:10.570 "num_base_bdevs_operational": 4, 00:18:10.570 "base_bdevs_list": [ 00:18:10.570 { 00:18:10.570 "name": "BaseBdev1", 00:18:10.570 "uuid": "8ad3d6f0-21c8-4bac-a844-b086bf7419d7", 00:18:10.570 "is_configured": true, 00:18:10.570 "data_offset": 2048, 00:18:10.570 "data_size": 63488 00:18:10.570 }, 00:18:10.570 { 00:18:10.570 "name": "BaseBdev2", 00:18:10.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.570 "is_configured": false, 00:18:10.570 "data_offset": 0, 00:18:10.570 "data_size": 0 00:18:10.570 }, 00:18:10.570 { 00:18:10.570 "name": "BaseBdev3", 00:18:10.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.570 "is_configured": false, 00:18:10.570 "data_offset": 0, 00:18:10.570 "data_size": 0 00:18:10.570 }, 00:18:10.570 { 00:18:10.570 "name": "BaseBdev4", 00:18:10.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.570 "is_configured": false, 00:18:10.570 "data_offset": 0, 00:18:10.570 "data_size": 0 00:18:10.570 } 00:18:10.570 ] 00:18:10.570 }' 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.570 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.137 [2024-11-04 14:50:40.802202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:18:11.137 BaseBdev2 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.137 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.137 [ 00:18:11.137 { 00:18:11.137 "name": "BaseBdev2", 00:18:11.137 "aliases": [ 00:18:11.137 "3d5bb1d8-2354-45c2-8102-bb79be2f5033" 00:18:11.137 ], 00:18:11.137 "product_name": "Malloc disk", 00:18:11.138 "block_size": 512, 00:18:11.138 "num_blocks": 65536, 00:18:11.138 "uuid": "3d5bb1d8-2354-45c2-8102-bb79be2f5033", 
00:18:11.138 "assigned_rate_limits": { 00:18:11.138 "rw_ios_per_sec": 0, 00:18:11.138 "rw_mbytes_per_sec": 0, 00:18:11.138 "r_mbytes_per_sec": 0, 00:18:11.138 "w_mbytes_per_sec": 0 00:18:11.138 }, 00:18:11.138 "claimed": true, 00:18:11.138 "claim_type": "exclusive_write", 00:18:11.138 "zoned": false, 00:18:11.138 "supported_io_types": { 00:18:11.138 "read": true, 00:18:11.138 "write": true, 00:18:11.138 "unmap": true, 00:18:11.138 "flush": true, 00:18:11.138 "reset": true, 00:18:11.138 "nvme_admin": false, 00:18:11.138 "nvme_io": false, 00:18:11.138 "nvme_io_md": false, 00:18:11.138 "write_zeroes": true, 00:18:11.138 "zcopy": true, 00:18:11.138 "get_zone_info": false, 00:18:11.138 "zone_management": false, 00:18:11.138 "zone_append": false, 00:18:11.138 "compare": false, 00:18:11.138 "compare_and_write": false, 00:18:11.138 "abort": true, 00:18:11.138 "seek_hole": false, 00:18:11.138 "seek_data": false, 00:18:11.138 "copy": true, 00:18:11.138 "nvme_iov_md": false 00:18:11.138 }, 00:18:11.138 "memory_domains": [ 00:18:11.138 { 00:18:11.138 "dma_device_id": "system", 00:18:11.138 "dma_device_type": 1 00:18:11.138 }, 00:18:11.138 { 00:18:11.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.138 "dma_device_type": 2 00:18:11.138 } 00:18:11.138 ], 00:18:11.138 "driver_specific": {} 00:18:11.138 } 00:18:11.138 ] 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.138 "name": "Existed_Raid", 00:18:11.138 "uuid": "ace9e85c-4f33-4471-ae8d-24920c326bec", 00:18:11.138 "strip_size_kb": 64, 00:18:11.138 "state": "configuring", 00:18:11.138 "raid_level": "concat", 00:18:11.138 "superblock": true, 00:18:11.138 "num_base_bdevs": 4, 00:18:11.138 "num_base_bdevs_discovered": 2, 00:18:11.138 
"num_base_bdevs_operational": 4, 00:18:11.138 "base_bdevs_list": [ 00:18:11.138 { 00:18:11.138 "name": "BaseBdev1", 00:18:11.138 "uuid": "8ad3d6f0-21c8-4bac-a844-b086bf7419d7", 00:18:11.138 "is_configured": true, 00:18:11.138 "data_offset": 2048, 00:18:11.138 "data_size": 63488 00:18:11.138 }, 00:18:11.138 { 00:18:11.138 "name": "BaseBdev2", 00:18:11.138 "uuid": "3d5bb1d8-2354-45c2-8102-bb79be2f5033", 00:18:11.138 "is_configured": true, 00:18:11.138 "data_offset": 2048, 00:18:11.138 "data_size": 63488 00:18:11.138 }, 00:18:11.138 { 00:18:11.138 "name": "BaseBdev3", 00:18:11.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.138 "is_configured": false, 00:18:11.138 "data_offset": 0, 00:18:11.138 "data_size": 0 00:18:11.138 }, 00:18:11.138 { 00:18:11.138 "name": "BaseBdev4", 00:18:11.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.138 "is_configured": false, 00:18:11.138 "data_offset": 0, 00:18:11.138 "data_size": 0 00:18:11.138 } 00:18:11.138 ] 00:18:11.138 }' 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.138 14:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.705 [2024-11-04 14:50:41.401854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.705 BaseBdev3 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.705 [ 00:18:11.705 { 00:18:11.705 "name": "BaseBdev3", 00:18:11.705 "aliases": [ 00:18:11.705 "6115646b-1bf7-4486-99fa-82a23d957ebe" 00:18:11.705 ], 00:18:11.705 "product_name": "Malloc disk", 00:18:11.705 "block_size": 512, 00:18:11.705 "num_blocks": 65536, 00:18:11.705 "uuid": "6115646b-1bf7-4486-99fa-82a23d957ebe", 00:18:11.705 "assigned_rate_limits": { 00:18:11.705 "rw_ios_per_sec": 0, 00:18:11.705 "rw_mbytes_per_sec": 0, 00:18:11.705 "r_mbytes_per_sec": 0, 00:18:11.705 "w_mbytes_per_sec": 0 00:18:11.705 }, 00:18:11.705 "claimed": true, 00:18:11.705 "claim_type": "exclusive_write", 00:18:11.705 "zoned": false, 00:18:11.705 "supported_io_types": { 
00:18:11.705 "read": true, 00:18:11.705 "write": true, 00:18:11.705 "unmap": true, 00:18:11.705 "flush": true, 00:18:11.705 "reset": true, 00:18:11.705 "nvme_admin": false, 00:18:11.705 "nvme_io": false, 00:18:11.705 "nvme_io_md": false, 00:18:11.705 "write_zeroes": true, 00:18:11.705 "zcopy": true, 00:18:11.705 "get_zone_info": false, 00:18:11.705 "zone_management": false, 00:18:11.705 "zone_append": false, 00:18:11.705 "compare": false, 00:18:11.705 "compare_and_write": false, 00:18:11.705 "abort": true, 00:18:11.705 "seek_hole": false, 00:18:11.705 "seek_data": false, 00:18:11.705 "copy": true, 00:18:11.705 "nvme_iov_md": false 00:18:11.705 }, 00:18:11.705 "memory_domains": [ 00:18:11.705 { 00:18:11.705 "dma_device_id": "system", 00:18:11.705 "dma_device_type": 1 00:18:11.705 }, 00:18:11.705 { 00:18:11.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.705 "dma_device_type": 2 00:18:11.705 } 00:18:11.705 ], 00:18:11.705 "driver_specific": {} 00:18:11.705 } 00:18:11.705 ] 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.705 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.706 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.706 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.706 "name": "Existed_Raid", 00:18:11.706 "uuid": "ace9e85c-4f33-4471-ae8d-24920c326bec", 00:18:11.706 "strip_size_kb": 64, 00:18:11.706 "state": "configuring", 00:18:11.706 "raid_level": "concat", 00:18:11.706 "superblock": true, 00:18:11.706 "num_base_bdevs": 4, 00:18:11.706 "num_base_bdevs_discovered": 3, 00:18:11.706 "num_base_bdevs_operational": 4, 00:18:11.706 "base_bdevs_list": [ 00:18:11.706 { 00:18:11.706 "name": "BaseBdev1", 00:18:11.706 "uuid": "8ad3d6f0-21c8-4bac-a844-b086bf7419d7", 00:18:11.706 "is_configured": true, 00:18:11.706 "data_offset": 2048, 00:18:11.706 "data_size": 63488 00:18:11.706 }, 00:18:11.706 { 00:18:11.706 "name": "BaseBdev2", 00:18:11.706 
"uuid": "3d5bb1d8-2354-45c2-8102-bb79be2f5033", 00:18:11.706 "is_configured": true, 00:18:11.706 "data_offset": 2048, 00:18:11.706 "data_size": 63488 00:18:11.706 }, 00:18:11.706 { 00:18:11.706 "name": "BaseBdev3", 00:18:11.706 "uuid": "6115646b-1bf7-4486-99fa-82a23d957ebe", 00:18:11.706 "is_configured": true, 00:18:11.706 "data_offset": 2048, 00:18:11.706 "data_size": 63488 00:18:11.706 }, 00:18:11.706 { 00:18:11.706 "name": "BaseBdev4", 00:18:11.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.706 "is_configured": false, 00:18:11.706 "data_offset": 0, 00:18:11.706 "data_size": 0 00:18:11.706 } 00:18:11.706 ] 00:18:11.706 }' 00:18:11.706 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.706 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.272 14:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:12.272 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.272 14:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.272 [2024-11-04 14:50:42.012862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:12.272 BaseBdev4 00:18:12.272 [2024-11-04 14:50:42.013584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:12.272 [2024-11-04 14:50:42.013612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:12.272 [2024-11-04 14:50:42.014034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.272 [2024-11-04 14:50:42.014310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:12.272 [2024-11-04 14:50:42.014392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:18:12.272 [2024-11-04 14:50:42.014592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.272 [ 00:18:12.272 { 00:18:12.272 "name": "BaseBdev4", 00:18:12.272 "aliases": [ 00:18:12.272 "27129421-a310-4d98-8da8-8134681fbdaa" 00:18:12.272 ], 00:18:12.272 "product_name": "Malloc disk", 00:18:12.272 "block_size": 512, 00:18:12.272 
"num_blocks": 65536, 00:18:12.272 "uuid": "27129421-a310-4d98-8da8-8134681fbdaa", 00:18:12.272 "assigned_rate_limits": { 00:18:12.272 "rw_ios_per_sec": 0, 00:18:12.272 "rw_mbytes_per_sec": 0, 00:18:12.272 "r_mbytes_per_sec": 0, 00:18:12.272 "w_mbytes_per_sec": 0 00:18:12.272 }, 00:18:12.272 "claimed": true, 00:18:12.272 "claim_type": "exclusive_write", 00:18:12.272 "zoned": false, 00:18:12.272 "supported_io_types": { 00:18:12.272 "read": true, 00:18:12.272 "write": true, 00:18:12.272 "unmap": true, 00:18:12.272 "flush": true, 00:18:12.272 "reset": true, 00:18:12.272 "nvme_admin": false, 00:18:12.272 "nvme_io": false, 00:18:12.272 "nvme_io_md": false, 00:18:12.272 "write_zeroes": true, 00:18:12.272 "zcopy": true, 00:18:12.272 "get_zone_info": false, 00:18:12.272 "zone_management": false, 00:18:12.272 "zone_append": false, 00:18:12.272 "compare": false, 00:18:12.272 "compare_and_write": false, 00:18:12.272 "abort": true, 00:18:12.272 "seek_hole": false, 00:18:12.272 "seek_data": false, 00:18:12.272 "copy": true, 00:18:12.272 "nvme_iov_md": false 00:18:12.272 }, 00:18:12.272 "memory_domains": [ 00:18:12.272 { 00:18:12.272 "dma_device_id": "system", 00:18:12.272 "dma_device_type": 1 00:18:12.272 }, 00:18:12.272 { 00:18:12.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.272 "dma_device_type": 2 00:18:12.272 } 00:18:12.272 ], 00:18:12.272 "driver_specific": {} 00:18:12.272 } 00:18:12.272 ] 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.272 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.272 "name": "Existed_Raid", 00:18:12.273 "uuid": "ace9e85c-4f33-4471-ae8d-24920c326bec", 00:18:12.273 "strip_size_kb": 64, 00:18:12.273 "state": "online", 00:18:12.273 "raid_level": "concat", 00:18:12.273 "superblock": true, 00:18:12.273 "num_base_bdevs": 4, 
00:18:12.273 "num_base_bdevs_discovered": 4, 00:18:12.273 "num_base_bdevs_operational": 4, 00:18:12.273 "base_bdevs_list": [ 00:18:12.273 { 00:18:12.273 "name": "BaseBdev1", 00:18:12.273 "uuid": "8ad3d6f0-21c8-4bac-a844-b086bf7419d7", 00:18:12.273 "is_configured": true, 00:18:12.273 "data_offset": 2048, 00:18:12.273 "data_size": 63488 00:18:12.273 }, 00:18:12.273 { 00:18:12.273 "name": "BaseBdev2", 00:18:12.273 "uuid": "3d5bb1d8-2354-45c2-8102-bb79be2f5033", 00:18:12.273 "is_configured": true, 00:18:12.273 "data_offset": 2048, 00:18:12.273 "data_size": 63488 00:18:12.273 }, 00:18:12.273 { 00:18:12.273 "name": "BaseBdev3", 00:18:12.273 "uuid": "6115646b-1bf7-4486-99fa-82a23d957ebe", 00:18:12.273 "is_configured": true, 00:18:12.273 "data_offset": 2048, 00:18:12.273 "data_size": 63488 00:18:12.273 }, 00:18:12.273 { 00:18:12.273 "name": "BaseBdev4", 00:18:12.273 "uuid": "27129421-a310-4d98-8da8-8134681fbdaa", 00:18:12.273 "is_configured": true, 00:18:12.273 "data_offset": 2048, 00:18:12.273 "data_size": 63488 00:18:12.273 } 00:18:12.273 ] 00:18:12.273 }' 00:18:12.273 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.273 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:12.839 
14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:12.839 [2024-11-04 14:50:42.585660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.839 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.839 "name": "Existed_Raid", 00:18:12.839 "aliases": [ 00:18:12.839 "ace9e85c-4f33-4471-ae8d-24920c326bec" 00:18:12.839 ], 00:18:12.839 "product_name": "Raid Volume", 00:18:12.839 "block_size": 512, 00:18:12.839 "num_blocks": 253952, 00:18:12.839 "uuid": "ace9e85c-4f33-4471-ae8d-24920c326bec", 00:18:12.839 "assigned_rate_limits": { 00:18:12.839 "rw_ios_per_sec": 0, 00:18:12.839 "rw_mbytes_per_sec": 0, 00:18:12.839 "r_mbytes_per_sec": 0, 00:18:12.839 "w_mbytes_per_sec": 0 00:18:12.839 }, 00:18:12.839 "claimed": false, 00:18:12.839 "zoned": false, 00:18:12.839 "supported_io_types": { 00:18:12.839 "read": true, 00:18:12.839 "write": true, 00:18:12.839 "unmap": true, 00:18:12.839 "flush": true, 00:18:12.839 "reset": true, 00:18:12.839 "nvme_admin": false, 00:18:12.839 "nvme_io": false, 00:18:12.839 "nvme_io_md": false, 00:18:12.839 "write_zeroes": true, 00:18:12.839 "zcopy": false, 00:18:12.839 "get_zone_info": false, 00:18:12.839 "zone_management": false, 00:18:12.839 "zone_append": false, 00:18:12.839 "compare": false, 00:18:12.839 "compare_and_write": false, 00:18:12.839 "abort": false, 00:18:12.839 "seek_hole": false, 00:18:12.839 "seek_data": false, 00:18:12.839 "copy": false, 00:18:12.839 
"nvme_iov_md": false 00:18:12.839 }, 00:18:12.839 "memory_domains": [ 00:18:12.839 { 00:18:12.839 "dma_device_id": "system", 00:18:12.839 "dma_device_type": 1 00:18:12.839 }, 00:18:12.839 { 00:18:12.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.839 "dma_device_type": 2 00:18:12.839 }, 00:18:12.840 { 00:18:12.840 "dma_device_id": "system", 00:18:12.840 "dma_device_type": 1 00:18:12.840 }, 00:18:12.840 { 00:18:12.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.840 "dma_device_type": 2 00:18:12.840 }, 00:18:12.840 { 00:18:12.840 "dma_device_id": "system", 00:18:12.840 "dma_device_type": 1 00:18:12.840 }, 00:18:12.840 { 00:18:12.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.840 "dma_device_type": 2 00:18:12.840 }, 00:18:12.840 { 00:18:12.840 "dma_device_id": "system", 00:18:12.840 "dma_device_type": 1 00:18:12.840 }, 00:18:12.840 { 00:18:12.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.840 "dma_device_type": 2 00:18:12.840 } 00:18:12.840 ], 00:18:12.840 "driver_specific": { 00:18:12.840 "raid": { 00:18:12.840 "uuid": "ace9e85c-4f33-4471-ae8d-24920c326bec", 00:18:12.840 "strip_size_kb": 64, 00:18:12.840 "state": "online", 00:18:12.840 "raid_level": "concat", 00:18:12.840 "superblock": true, 00:18:12.840 "num_base_bdevs": 4, 00:18:12.840 "num_base_bdevs_discovered": 4, 00:18:12.840 "num_base_bdevs_operational": 4, 00:18:12.840 "base_bdevs_list": [ 00:18:12.840 { 00:18:12.840 "name": "BaseBdev1", 00:18:12.840 "uuid": "8ad3d6f0-21c8-4bac-a844-b086bf7419d7", 00:18:12.840 "is_configured": true, 00:18:12.840 "data_offset": 2048, 00:18:12.840 "data_size": 63488 00:18:12.840 }, 00:18:12.840 { 00:18:12.840 "name": "BaseBdev2", 00:18:12.840 "uuid": "3d5bb1d8-2354-45c2-8102-bb79be2f5033", 00:18:12.840 "is_configured": true, 00:18:12.840 "data_offset": 2048, 00:18:12.840 "data_size": 63488 00:18:12.840 }, 00:18:12.840 { 00:18:12.840 "name": "BaseBdev3", 00:18:12.840 "uuid": "6115646b-1bf7-4486-99fa-82a23d957ebe", 00:18:12.840 "is_configured": true, 
00:18:12.840 "data_offset": 2048, 00:18:12.840 "data_size": 63488 00:18:12.840 }, 00:18:12.840 { 00:18:12.840 "name": "BaseBdev4", 00:18:12.840 "uuid": "27129421-a310-4d98-8da8-8134681fbdaa", 00:18:12.840 "is_configured": true, 00:18:12.840 "data_offset": 2048, 00:18:12.840 "data_size": 63488 00:18:12.840 } 00:18:12.840 ] 00:18:12.840 } 00:18:12.840 } 00:18:12.840 }' 00:18:12.840 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:12.840 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:12.840 BaseBdev2 00:18:12.840 BaseBdev3 00:18:12.840 BaseBdev4' 00:18:12.840 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.840 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:12.840 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.098 14:50:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.098 14:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.098 [2024-11-04 14:50:42.977399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:13.098 [2024-11-04 14:50:42.977581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.098 [2024-11-04 14:50:42.977808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.357 "name": "Existed_Raid", 00:18:13.357 "uuid": "ace9e85c-4f33-4471-ae8d-24920c326bec", 00:18:13.357 "strip_size_kb": 64, 00:18:13.357 "state": "offline", 00:18:13.357 "raid_level": "concat", 00:18:13.357 "superblock": true, 00:18:13.357 "num_base_bdevs": 4, 00:18:13.357 "num_base_bdevs_discovered": 3, 00:18:13.357 "num_base_bdevs_operational": 3, 00:18:13.357 "base_bdevs_list": [ 00:18:13.357 { 00:18:13.357 "name": null, 00:18:13.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.357 "is_configured": false, 00:18:13.357 "data_offset": 0, 00:18:13.357 "data_size": 63488 00:18:13.357 }, 00:18:13.357 { 00:18:13.357 "name": "BaseBdev2", 00:18:13.357 "uuid": "3d5bb1d8-2354-45c2-8102-bb79be2f5033", 00:18:13.357 "is_configured": true, 00:18:13.357 "data_offset": 2048, 00:18:13.357 "data_size": 63488 00:18:13.357 }, 00:18:13.357 { 00:18:13.357 "name": "BaseBdev3", 00:18:13.357 "uuid": "6115646b-1bf7-4486-99fa-82a23d957ebe", 00:18:13.357 "is_configured": true, 00:18:13.357 "data_offset": 2048, 00:18:13.357 "data_size": 63488 00:18:13.357 }, 00:18:13.357 { 00:18:13.357 "name": "BaseBdev4", 00:18:13.357 "uuid": "27129421-a310-4d98-8da8-8134681fbdaa", 00:18:13.357 "is_configured": true, 00:18:13.357 "data_offset": 2048, 00:18:13.357 "data_size": 63488 00:18:13.357 } 00:18:13.357 ] 00:18:13.357 }' 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.357 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.926 
14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.926 [2024-11-04 14:50:43.645248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.926 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.926 [2024-11-04 14:50:43.793902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:14.184 14:50:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.184 14:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.184 [2024-11-04 14:50:43.951434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:14.184 [2024-11-04 14:50:43.951632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:14.184 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.184 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:14.184 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.184 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.184 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.184 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:14.184 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.184 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.443 BaseBdev2 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.443 [ 00:18:14.443 { 00:18:14.443 "name": "BaseBdev2", 00:18:14.443 "aliases": [ 00:18:14.443 
"ad48ad9c-e006-4940-bfb5-23cca0d48acd" 00:18:14.443 ], 00:18:14.443 "product_name": "Malloc disk", 00:18:14.443 "block_size": 512, 00:18:14.443 "num_blocks": 65536, 00:18:14.443 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:14.443 "assigned_rate_limits": { 00:18:14.443 "rw_ios_per_sec": 0, 00:18:14.443 "rw_mbytes_per_sec": 0, 00:18:14.443 "r_mbytes_per_sec": 0, 00:18:14.443 "w_mbytes_per_sec": 0 00:18:14.443 }, 00:18:14.443 "claimed": false, 00:18:14.443 "zoned": false, 00:18:14.443 "supported_io_types": { 00:18:14.443 "read": true, 00:18:14.443 "write": true, 00:18:14.443 "unmap": true, 00:18:14.443 "flush": true, 00:18:14.443 "reset": true, 00:18:14.443 "nvme_admin": false, 00:18:14.443 "nvme_io": false, 00:18:14.443 "nvme_io_md": false, 00:18:14.443 "write_zeroes": true, 00:18:14.443 "zcopy": true, 00:18:14.443 "get_zone_info": false, 00:18:14.443 "zone_management": false, 00:18:14.443 "zone_append": false, 00:18:14.443 "compare": false, 00:18:14.443 "compare_and_write": false, 00:18:14.443 "abort": true, 00:18:14.443 "seek_hole": false, 00:18:14.443 "seek_data": false, 00:18:14.443 "copy": true, 00:18:14.443 "nvme_iov_md": false 00:18:14.443 }, 00:18:14.443 "memory_domains": [ 00:18:14.443 { 00:18:14.443 "dma_device_id": "system", 00:18:14.443 "dma_device_type": 1 00:18:14.443 }, 00:18:14.443 { 00:18:14.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.443 "dma_device_type": 2 00:18:14.443 } 00:18:14.443 ], 00:18:14.443 "driver_specific": {} 00:18:14.443 } 00:18:14.443 ] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:14.443 14:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.443 BaseBdev3 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.443 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.443 [ 00:18:14.443 { 
00:18:14.443 "name": "BaseBdev3", 00:18:14.443 "aliases": [ 00:18:14.443 "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0" 00:18:14.443 ], 00:18:14.443 "product_name": "Malloc disk", 00:18:14.443 "block_size": 512, 00:18:14.443 "num_blocks": 65536, 00:18:14.443 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:14.443 "assigned_rate_limits": { 00:18:14.443 "rw_ios_per_sec": 0, 00:18:14.443 "rw_mbytes_per_sec": 0, 00:18:14.443 "r_mbytes_per_sec": 0, 00:18:14.443 "w_mbytes_per_sec": 0 00:18:14.443 }, 00:18:14.443 "claimed": false, 00:18:14.443 "zoned": false, 00:18:14.443 "supported_io_types": { 00:18:14.443 "read": true, 00:18:14.443 "write": true, 00:18:14.443 "unmap": true, 00:18:14.443 "flush": true, 00:18:14.443 "reset": true, 00:18:14.443 "nvme_admin": false, 00:18:14.443 "nvme_io": false, 00:18:14.443 "nvme_io_md": false, 00:18:14.443 "write_zeroes": true, 00:18:14.443 "zcopy": true, 00:18:14.443 "get_zone_info": false, 00:18:14.443 "zone_management": false, 00:18:14.443 "zone_append": false, 00:18:14.443 "compare": false, 00:18:14.443 "compare_and_write": false, 00:18:14.443 "abort": true, 00:18:14.443 "seek_hole": false, 00:18:14.443 "seek_data": false, 00:18:14.443 "copy": true, 00:18:14.443 "nvme_iov_md": false 00:18:14.443 }, 00:18:14.443 "memory_domains": [ 00:18:14.443 { 00:18:14.443 "dma_device_id": "system", 00:18:14.443 "dma_device_type": 1 00:18:14.443 }, 00:18:14.443 { 00:18:14.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.443 "dma_device_type": 2 00:18:14.443 } 00:18:14.443 ], 00:18:14.443 "driver_specific": {} 00:18:14.443 } 00:18:14.443 ] 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.444 BaseBdev4 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.444 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:18:14.444 [ 00:18:14.444 { 00:18:14.444 "name": "BaseBdev4", 00:18:14.444 "aliases": [ 00:18:14.444 "9914bae1-2d79-40f4-92f6-4414b7f2025f" 00:18:14.444 ], 00:18:14.444 "product_name": "Malloc disk", 00:18:14.444 "block_size": 512, 00:18:14.444 "num_blocks": 65536, 00:18:14.444 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:14.444 "assigned_rate_limits": { 00:18:14.444 "rw_ios_per_sec": 0, 00:18:14.444 "rw_mbytes_per_sec": 0, 00:18:14.444 "r_mbytes_per_sec": 0, 00:18:14.444 "w_mbytes_per_sec": 0 00:18:14.444 }, 00:18:14.444 "claimed": false, 00:18:14.444 "zoned": false, 00:18:14.444 "supported_io_types": { 00:18:14.444 "read": true, 00:18:14.444 "write": true, 00:18:14.444 "unmap": true, 00:18:14.444 "flush": true, 00:18:14.444 "reset": true, 00:18:14.444 "nvme_admin": false, 00:18:14.444 "nvme_io": false, 00:18:14.444 "nvme_io_md": false, 00:18:14.444 "write_zeroes": true, 00:18:14.444 "zcopy": true, 00:18:14.444 "get_zone_info": false, 00:18:14.444 "zone_management": false, 00:18:14.702 "zone_append": false, 00:18:14.702 "compare": false, 00:18:14.702 "compare_and_write": false, 00:18:14.702 "abort": true, 00:18:14.702 "seek_hole": false, 00:18:14.702 "seek_data": false, 00:18:14.702 "copy": true, 00:18:14.702 "nvme_iov_md": false 00:18:14.703 }, 00:18:14.703 "memory_domains": [ 00:18:14.703 { 00:18:14.703 "dma_device_id": "system", 00:18:14.703 "dma_device_type": 1 00:18:14.703 }, 00:18:14.703 { 00:18:14.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.703 "dma_device_type": 2 00:18:14.703 } 00:18:14.703 ], 00:18:14.703 "driver_specific": {} 00:18:14.703 } 00:18:14.703 ] 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:14.703 14:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.703 [2024-11-04 14:50:44.343331] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.703 [2024-11-04 14:50:44.343545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.703 [2024-11-04 14:50:44.343595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.703 [2024-11-04 14:50:44.346361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:14.703 [2024-11-04 14:50:44.346438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.703 "name": "Existed_Raid", 00:18:14.703 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:14.703 "strip_size_kb": 64, 00:18:14.703 "state": "configuring", 00:18:14.703 "raid_level": "concat", 00:18:14.703 "superblock": true, 00:18:14.703 "num_base_bdevs": 4, 00:18:14.703 "num_base_bdevs_discovered": 3, 00:18:14.703 "num_base_bdevs_operational": 4, 00:18:14.703 "base_bdevs_list": [ 00:18:14.703 { 00:18:14.703 "name": "BaseBdev1", 00:18:14.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.703 "is_configured": false, 00:18:14.703 "data_offset": 0, 00:18:14.703 "data_size": 0 00:18:14.703 }, 00:18:14.703 { 00:18:14.703 "name": "BaseBdev2", 00:18:14.703 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:14.703 "is_configured": true, 00:18:14.703 "data_offset": 2048, 00:18:14.703 "data_size": 63488 
00:18:14.703 }, 00:18:14.703 { 00:18:14.703 "name": "BaseBdev3", 00:18:14.703 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:14.703 "is_configured": true, 00:18:14.703 "data_offset": 2048, 00:18:14.703 "data_size": 63488 00:18:14.703 }, 00:18:14.703 { 00:18:14.703 "name": "BaseBdev4", 00:18:14.703 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:14.703 "is_configured": true, 00:18:14.703 "data_offset": 2048, 00:18:14.703 "data_size": 63488 00:18:14.703 } 00:18:14.703 ] 00:18:14.703 }' 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.703 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.270 [2024-11-04 14:50:44.875599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.270 "name": "Existed_Raid", 00:18:15.270 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:15.270 "strip_size_kb": 64, 00:18:15.270 "state": "configuring", 00:18:15.270 "raid_level": "concat", 00:18:15.270 "superblock": true, 00:18:15.270 "num_base_bdevs": 4, 00:18:15.270 "num_base_bdevs_discovered": 2, 00:18:15.270 "num_base_bdevs_operational": 4, 00:18:15.270 "base_bdevs_list": [ 00:18:15.270 { 00:18:15.270 "name": "BaseBdev1", 00:18:15.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.270 "is_configured": false, 00:18:15.270 "data_offset": 0, 00:18:15.270 "data_size": 0 00:18:15.270 }, 00:18:15.270 { 00:18:15.270 "name": null, 00:18:15.270 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:15.270 "is_configured": false, 00:18:15.270 "data_offset": 0, 00:18:15.270 "data_size": 63488 
00:18:15.270 }, 00:18:15.270 { 00:18:15.270 "name": "BaseBdev3", 00:18:15.270 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:15.270 "is_configured": true, 00:18:15.270 "data_offset": 2048, 00:18:15.270 "data_size": 63488 00:18:15.270 }, 00:18:15.270 { 00:18:15.270 "name": "BaseBdev4", 00:18:15.270 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:15.270 "is_configured": true, 00:18:15.270 "data_offset": 2048, 00:18:15.270 "data_size": 63488 00:18:15.270 } 00:18:15.270 ] 00:18:15.270 }' 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.270 14:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.528 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:15.528 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.528 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.528 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.528 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.790 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:15.790 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:15.790 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.790 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.790 [2024-11-04 14:50:45.482099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.790 BaseBdev1 00:18:15.790 14:50:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.791 [ 00:18:15.791 { 00:18:15.791 "name": "BaseBdev1", 00:18:15.791 "aliases": [ 00:18:15.791 "d0dd7976-1087-44fa-84ec-ee15c41f3a8b" 00:18:15.791 ], 00:18:15.791 "product_name": "Malloc disk", 00:18:15.791 "block_size": 512, 00:18:15.791 "num_blocks": 65536, 00:18:15.791 "uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:15.791 "assigned_rate_limits": { 00:18:15.791 "rw_ios_per_sec": 0, 00:18:15.791 "rw_mbytes_per_sec": 0, 
00:18:15.791 "r_mbytes_per_sec": 0, 00:18:15.791 "w_mbytes_per_sec": 0 00:18:15.791 }, 00:18:15.791 "claimed": true, 00:18:15.791 "claim_type": "exclusive_write", 00:18:15.791 "zoned": false, 00:18:15.791 "supported_io_types": { 00:18:15.791 "read": true, 00:18:15.791 "write": true, 00:18:15.791 "unmap": true, 00:18:15.791 "flush": true, 00:18:15.791 "reset": true, 00:18:15.791 "nvme_admin": false, 00:18:15.791 "nvme_io": false, 00:18:15.791 "nvme_io_md": false, 00:18:15.791 "write_zeroes": true, 00:18:15.791 "zcopy": true, 00:18:15.791 "get_zone_info": false, 00:18:15.791 "zone_management": false, 00:18:15.791 "zone_append": false, 00:18:15.791 "compare": false, 00:18:15.791 "compare_and_write": false, 00:18:15.791 "abort": true, 00:18:15.791 "seek_hole": false, 00:18:15.791 "seek_data": false, 00:18:15.791 "copy": true, 00:18:15.791 "nvme_iov_md": false 00:18:15.791 }, 00:18:15.791 "memory_domains": [ 00:18:15.791 { 00:18:15.791 "dma_device_id": "system", 00:18:15.791 "dma_device_type": 1 00:18:15.791 }, 00:18:15.791 { 00:18:15.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.791 "dma_device_type": 2 00:18:15.791 } 00:18:15.791 ], 00:18:15.791 "driver_specific": {} 00:18:15.791 } 00:18:15.791 ] 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:15.791 14:50:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.791 "name": "Existed_Raid", 00:18:15.791 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:15.791 "strip_size_kb": 64, 00:18:15.791 "state": "configuring", 00:18:15.791 "raid_level": "concat", 00:18:15.791 "superblock": true, 00:18:15.791 "num_base_bdevs": 4, 00:18:15.791 "num_base_bdevs_discovered": 3, 00:18:15.791 "num_base_bdevs_operational": 4, 00:18:15.791 "base_bdevs_list": [ 00:18:15.791 { 00:18:15.791 "name": "BaseBdev1", 00:18:15.791 "uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:15.791 "is_configured": true, 00:18:15.791 "data_offset": 2048, 00:18:15.791 "data_size": 63488 00:18:15.791 }, 00:18:15.791 { 
00:18:15.791 "name": null, 00:18:15.791 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:15.791 "is_configured": false, 00:18:15.791 "data_offset": 0, 00:18:15.791 "data_size": 63488 00:18:15.791 }, 00:18:15.791 { 00:18:15.791 "name": "BaseBdev3", 00:18:15.791 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:15.791 "is_configured": true, 00:18:15.791 "data_offset": 2048, 00:18:15.791 "data_size": 63488 00:18:15.791 }, 00:18:15.791 { 00:18:15.791 "name": "BaseBdev4", 00:18:15.791 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:15.791 "is_configured": true, 00:18:15.791 "data_offset": 2048, 00:18:15.791 "data_size": 63488 00:18:15.791 } 00:18:15.791 ] 00:18:15.791 }' 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.791 14:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.370 [2024-11-04 14:50:46.082469] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.370 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.370 14:50:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.370 "name": "Existed_Raid", 00:18:16.370 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:16.370 "strip_size_kb": 64, 00:18:16.370 "state": "configuring", 00:18:16.370 "raid_level": "concat", 00:18:16.370 "superblock": true, 00:18:16.370 "num_base_bdevs": 4, 00:18:16.370 "num_base_bdevs_discovered": 2, 00:18:16.370 "num_base_bdevs_operational": 4, 00:18:16.370 "base_bdevs_list": [ 00:18:16.370 { 00:18:16.370 "name": "BaseBdev1", 00:18:16.370 "uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:16.370 "is_configured": true, 00:18:16.370 "data_offset": 2048, 00:18:16.370 "data_size": 63488 00:18:16.370 }, 00:18:16.370 { 00:18:16.370 "name": null, 00:18:16.370 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:16.370 "is_configured": false, 00:18:16.370 "data_offset": 0, 00:18:16.370 "data_size": 63488 00:18:16.370 }, 00:18:16.370 { 00:18:16.371 "name": null, 00:18:16.371 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:16.371 "is_configured": false, 00:18:16.371 "data_offset": 0, 00:18:16.371 "data_size": 63488 00:18:16.371 }, 00:18:16.371 { 00:18:16.371 "name": "BaseBdev4", 00:18:16.371 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:16.371 "is_configured": true, 00:18:16.371 "data_offset": 2048, 00:18:16.371 "data_size": 63488 00:18:16.371 } 00:18:16.371 ] 00:18:16.371 }' 00:18:16.371 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.371 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.937 
14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.937 [2024-11-04 14:50:46.658657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.937 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.937 "name": "Existed_Raid", 00:18:16.937 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:16.937 "strip_size_kb": 64, 00:18:16.937 "state": "configuring", 00:18:16.937 "raid_level": "concat", 00:18:16.937 "superblock": true, 00:18:16.937 "num_base_bdevs": 4, 00:18:16.937 "num_base_bdevs_discovered": 3, 00:18:16.937 "num_base_bdevs_operational": 4, 00:18:16.937 "base_bdevs_list": [ 00:18:16.937 { 00:18:16.937 "name": "BaseBdev1", 00:18:16.937 "uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:16.937 "is_configured": true, 00:18:16.937 "data_offset": 2048, 00:18:16.937 "data_size": 63488 00:18:16.937 }, 00:18:16.937 { 00:18:16.937 "name": null, 00:18:16.937 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:16.937 "is_configured": false, 00:18:16.937 "data_offset": 0, 00:18:16.937 "data_size": 63488 00:18:16.937 }, 00:18:16.937 { 00:18:16.937 "name": "BaseBdev3", 00:18:16.937 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:16.937 "is_configured": true, 00:18:16.937 "data_offset": 2048, 00:18:16.937 "data_size": 63488 00:18:16.937 }, 00:18:16.937 { 00:18:16.937 "name": "BaseBdev4", 00:18:16.937 "uuid": 
"9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:16.937 "is_configured": true, 00:18:16.937 "data_offset": 2048, 00:18:16.937 "data_size": 63488 00:18:16.937 } 00:18:16.937 ] 00:18:16.937 }' 00:18:16.938 14:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.938 14:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.504 [2024-11-04 14:50:47.235063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.504 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.505 "name": "Existed_Raid", 00:18:17.505 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:17.505 "strip_size_kb": 64, 00:18:17.505 "state": "configuring", 00:18:17.505 "raid_level": "concat", 00:18:17.505 "superblock": true, 00:18:17.505 "num_base_bdevs": 4, 00:18:17.505 "num_base_bdevs_discovered": 2, 00:18:17.505 "num_base_bdevs_operational": 4, 00:18:17.505 "base_bdevs_list": [ 00:18:17.505 { 00:18:17.505 "name": null, 00:18:17.505 
"uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:17.505 "is_configured": false, 00:18:17.505 "data_offset": 0, 00:18:17.505 "data_size": 63488 00:18:17.505 }, 00:18:17.505 { 00:18:17.505 "name": null, 00:18:17.505 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:17.505 "is_configured": false, 00:18:17.505 "data_offset": 0, 00:18:17.505 "data_size": 63488 00:18:17.505 }, 00:18:17.505 { 00:18:17.505 "name": "BaseBdev3", 00:18:17.505 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:17.505 "is_configured": true, 00:18:17.505 "data_offset": 2048, 00:18:17.505 "data_size": 63488 00:18:17.505 }, 00:18:17.505 { 00:18:17.505 "name": "BaseBdev4", 00:18:17.505 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:17.505 "is_configured": true, 00:18:17.505 "data_offset": 2048, 00:18:17.505 "data_size": 63488 00:18:17.505 } 00:18:17.505 ] 00:18:17.505 }' 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.505 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.070 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.070 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.070 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.071 [2024-11-04 14:50:47.914704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.071 14:50:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.071 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.329 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.329 "name": "Existed_Raid", 00:18:18.329 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:18.329 "strip_size_kb": 64, 00:18:18.329 "state": "configuring", 00:18:18.329 "raid_level": "concat", 00:18:18.329 "superblock": true, 00:18:18.329 "num_base_bdevs": 4, 00:18:18.329 "num_base_bdevs_discovered": 3, 00:18:18.329 "num_base_bdevs_operational": 4, 00:18:18.329 "base_bdevs_list": [ 00:18:18.329 { 00:18:18.329 "name": null, 00:18:18.329 "uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:18.329 "is_configured": false, 00:18:18.329 "data_offset": 0, 00:18:18.329 "data_size": 63488 00:18:18.329 }, 00:18:18.329 { 00:18:18.329 "name": "BaseBdev2", 00:18:18.329 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:18.329 "is_configured": true, 00:18:18.329 "data_offset": 2048, 00:18:18.329 "data_size": 63488 00:18:18.329 }, 00:18:18.329 { 00:18:18.329 "name": "BaseBdev3", 00:18:18.329 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:18.329 "is_configured": true, 00:18:18.329 "data_offset": 2048, 00:18:18.329 "data_size": 63488 00:18:18.329 }, 00:18:18.329 { 00:18:18.329 "name": "BaseBdev4", 00:18:18.329 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:18.329 "is_configured": true, 00:18:18.329 "data_offset": 2048, 00:18:18.329 "data_size": 63488 00:18:18.329 } 00:18:18.329 ] 00:18:18.329 }' 00:18:18.329 14:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.329 14:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.586 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:18.586 14:50:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.586 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.586 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.586 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d0dd7976-1087-44fa-84ec-ee15c41f3a8b 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.845 [2024-11-04 14:50:48.581941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:18.845 NewBaseBdev 00:18:18.845 [2024-11-04 14:50:48.582606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:18.845 [2024-11-04 14:50:48.582632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:18.845 [2024-11-04 14:50:48.582993] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:18.845 [2024-11-04 14:50:48.583226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:18.845 [2024-11-04 14:50:48.583250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:18.845 [2024-11-04 14:50:48.583440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:18.845 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.845 14:50:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.845 [ 00:18:18.845 { 00:18:18.845 "name": "NewBaseBdev", 00:18:18.845 "aliases": [ 00:18:18.845 "d0dd7976-1087-44fa-84ec-ee15c41f3a8b" 00:18:18.845 ], 00:18:18.845 "product_name": "Malloc disk", 00:18:18.845 "block_size": 512, 00:18:18.845 "num_blocks": 65536, 00:18:18.845 "uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:18.845 "assigned_rate_limits": { 00:18:18.845 "rw_ios_per_sec": 0, 00:18:18.845 "rw_mbytes_per_sec": 0, 00:18:18.845 "r_mbytes_per_sec": 0, 00:18:18.845 "w_mbytes_per_sec": 0 00:18:18.845 }, 00:18:18.845 "claimed": true, 00:18:18.845 "claim_type": "exclusive_write", 00:18:18.845 "zoned": false, 00:18:18.846 "supported_io_types": { 00:18:18.846 "read": true, 00:18:18.846 "write": true, 00:18:18.846 "unmap": true, 00:18:18.846 "flush": true, 00:18:18.846 "reset": true, 00:18:18.846 "nvme_admin": false, 00:18:18.846 "nvme_io": false, 00:18:18.846 "nvme_io_md": false, 00:18:18.846 "write_zeroes": true, 00:18:18.846 "zcopy": true, 00:18:18.846 "get_zone_info": false, 00:18:18.846 "zone_management": false, 00:18:18.846 "zone_append": false, 00:18:18.846 "compare": false, 00:18:18.846 "compare_and_write": false, 00:18:18.846 "abort": true, 00:18:18.846 "seek_hole": false, 00:18:18.846 "seek_data": false, 00:18:18.846 "copy": true, 00:18:18.846 "nvme_iov_md": false 00:18:18.846 }, 00:18:18.846 "memory_domains": [ 00:18:18.846 { 00:18:18.846 "dma_device_id": "system", 00:18:18.846 "dma_device_type": 1 00:18:18.846 }, 00:18:18.846 { 00:18:18.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.846 "dma_device_type": 2 00:18:18.846 } 00:18:18.846 ], 00:18:18.846 "driver_specific": {} 00:18:18.846 } 00:18:18.846 ] 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:18.846 14:50:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.846 "name": "Existed_Raid", 00:18:18.846 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:18.846 "strip_size_kb": 64, 00:18:18.846 
"state": "online", 00:18:18.846 "raid_level": "concat", 00:18:18.846 "superblock": true, 00:18:18.846 "num_base_bdevs": 4, 00:18:18.846 "num_base_bdevs_discovered": 4, 00:18:18.846 "num_base_bdevs_operational": 4, 00:18:18.846 "base_bdevs_list": [ 00:18:18.846 { 00:18:18.846 "name": "NewBaseBdev", 00:18:18.846 "uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:18.846 "is_configured": true, 00:18:18.846 "data_offset": 2048, 00:18:18.846 "data_size": 63488 00:18:18.846 }, 00:18:18.846 { 00:18:18.846 "name": "BaseBdev2", 00:18:18.846 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:18.846 "is_configured": true, 00:18:18.846 "data_offset": 2048, 00:18:18.846 "data_size": 63488 00:18:18.846 }, 00:18:18.846 { 00:18:18.846 "name": "BaseBdev3", 00:18:18.846 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:18.846 "is_configured": true, 00:18:18.846 "data_offset": 2048, 00:18:18.846 "data_size": 63488 00:18:18.846 }, 00:18:18.846 { 00:18:18.846 "name": "BaseBdev4", 00:18:18.846 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:18.846 "is_configured": true, 00:18:18.846 "data_offset": 2048, 00:18:18.846 "data_size": 63488 00:18:18.846 } 00:18:18.846 ] 00:18:18.846 }' 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.846 14:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.413 
14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.413 [2024-11-04 14:50:49.154733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.413 "name": "Existed_Raid", 00:18:19.413 "aliases": [ 00:18:19.413 "ea9c1d5b-c494-426c-9e53-5f0f432ee73b" 00:18:19.413 ], 00:18:19.413 "product_name": "Raid Volume", 00:18:19.413 "block_size": 512, 00:18:19.413 "num_blocks": 253952, 00:18:19.413 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:19.413 "assigned_rate_limits": { 00:18:19.413 "rw_ios_per_sec": 0, 00:18:19.413 "rw_mbytes_per_sec": 0, 00:18:19.413 "r_mbytes_per_sec": 0, 00:18:19.413 "w_mbytes_per_sec": 0 00:18:19.413 }, 00:18:19.413 "claimed": false, 00:18:19.413 "zoned": false, 00:18:19.413 "supported_io_types": { 00:18:19.413 "read": true, 00:18:19.413 "write": true, 00:18:19.413 "unmap": true, 00:18:19.413 "flush": true, 00:18:19.413 "reset": true, 00:18:19.413 "nvme_admin": false, 00:18:19.413 "nvme_io": false, 00:18:19.413 "nvme_io_md": false, 00:18:19.413 "write_zeroes": true, 00:18:19.413 "zcopy": false, 00:18:19.413 "get_zone_info": false, 00:18:19.413 "zone_management": false, 00:18:19.413 "zone_append": false, 00:18:19.413 "compare": false, 00:18:19.413 "compare_and_write": false, 00:18:19.413 "abort": 
false, 00:18:19.413 "seek_hole": false, 00:18:19.413 "seek_data": false, 00:18:19.413 "copy": false, 00:18:19.413 "nvme_iov_md": false 00:18:19.413 }, 00:18:19.413 "memory_domains": [ 00:18:19.413 { 00:18:19.413 "dma_device_id": "system", 00:18:19.413 "dma_device_type": 1 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.413 "dma_device_type": 2 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "dma_device_id": "system", 00:18:19.413 "dma_device_type": 1 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.413 "dma_device_type": 2 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "dma_device_id": "system", 00:18:19.413 "dma_device_type": 1 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.413 "dma_device_type": 2 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "dma_device_id": "system", 00:18:19.413 "dma_device_type": 1 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.413 "dma_device_type": 2 00:18:19.413 } 00:18:19.413 ], 00:18:19.413 "driver_specific": { 00:18:19.413 "raid": { 00:18:19.413 "uuid": "ea9c1d5b-c494-426c-9e53-5f0f432ee73b", 00:18:19.413 "strip_size_kb": 64, 00:18:19.413 "state": "online", 00:18:19.413 "raid_level": "concat", 00:18:19.413 "superblock": true, 00:18:19.413 "num_base_bdevs": 4, 00:18:19.413 "num_base_bdevs_discovered": 4, 00:18:19.413 "num_base_bdevs_operational": 4, 00:18:19.413 "base_bdevs_list": [ 00:18:19.413 { 00:18:19.413 "name": "NewBaseBdev", 00:18:19.413 "uuid": "d0dd7976-1087-44fa-84ec-ee15c41f3a8b", 00:18:19.413 "is_configured": true, 00:18:19.413 "data_offset": 2048, 00:18:19.413 "data_size": 63488 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "name": "BaseBdev2", 00:18:19.413 "uuid": "ad48ad9c-e006-4940-bfb5-23cca0d48acd", 00:18:19.413 "is_configured": true, 00:18:19.413 "data_offset": 2048, 00:18:19.413 "data_size": 63488 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 
"name": "BaseBdev3", 00:18:19.413 "uuid": "0b97c40e-c8ca-4682-b3c8-cdf212a8bdf0", 00:18:19.413 "is_configured": true, 00:18:19.413 "data_offset": 2048, 00:18:19.413 "data_size": 63488 00:18:19.413 }, 00:18:19.413 { 00:18:19.413 "name": "BaseBdev4", 00:18:19.413 "uuid": "9914bae1-2d79-40f4-92f6-4414b7f2025f", 00:18:19.413 "is_configured": true, 00:18:19.413 "data_offset": 2048, 00:18:19.413 "data_size": 63488 00:18:19.413 } 00:18:19.413 ] 00:18:19.413 } 00:18:19.413 } 00:18:19.413 }' 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:19.413 BaseBdev2 00:18:19.413 BaseBdev3 00:18:19.413 BaseBdev4' 00:18:19.413 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.672 14:50:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.672 [2024-11-04 14:50:49.546346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.672 [2024-11-04 14:50:49.546509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.672 [2024-11-04 14:50:49.546742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.672 [2024-11-04 14:50:49.546976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.672 [2024-11-04 14:50:49.547096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72220 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72220 ']' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72220 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:19.672 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72220 00:18:19.931 killing process with pid 72220 00:18:19.931 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:19.931 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:19.931 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72220' 00:18:19.931 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72220 00:18:19.931 [2024-11-04 14:50:49.584333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:19.931 14:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72220 00:18:20.190 [2024-11-04 14:50:49.979482] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:21.567 14:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:21.567 00:18:21.567 real 0m13.194s 00:18:21.567 user 0m21.567s 00:18:21.567 sys 0m1.912s 00:18:21.567 ************************************ 00:18:21.567 END TEST raid_state_function_test_sb 00:18:21.567 
************************************ 00:18:21.567 14:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:21.567 14:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.567 14:50:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:21.567 14:50:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:21.567 14:50:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:21.567 14:50:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.567 ************************************ 00:18:21.567 START TEST raid_superblock_test 00:18:21.567 ************************************ 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:21.567 14:50:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:18:21.567 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72907 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72907 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72907 ']' 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:21.568 14:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.568 [2024-11-04 14:50:51.363953] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:18:21.568 [2024-11-04 14:50:51.364931] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72907 ] 00:18:21.826 [2024-11-04 14:50:51.554975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.084 [2024-11-04 14:50:51.718670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.084 [2024-11-04 14:50:51.959432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.084 [2024-11-04 14:50:51.959526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:22.651 
14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 malloc1 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 [2024-11-04 14:50:52.419682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:22.651 [2024-11-04 14:50:52.420110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.651 [2024-11-04 14:50:52.420160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:22.651 [2024-11-04 14:50:52.420179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.651 [2024-11-04 14:50:52.423460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.651 [2024-11-04 14:50:52.423505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:22.652 pt1 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.652 malloc2 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.652 [2024-11-04 14:50:52.479481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.652 [2024-11-04 14:50:52.479579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.652 [2024-11-04 14:50:52.479613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:22.652 [2024-11-04 14:50:52.479627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.652 [2024-11-04 14:50:52.482720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.652 [2024-11-04 14:50:52.482761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.652 
pt2 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.652 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.910 malloc3 00:18:22.910 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.910 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.911 [2024-11-04 14:50:52.555495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:22.911 [2024-11-04 14:50:52.555624] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.911 [2024-11-04 14:50:52.555661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:22.911 [2024-11-04 14:50:52.555693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.911 [2024-11-04 14:50:52.559101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.911 [2024-11-04 14:50:52.559161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:22.911 pt3 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.911 malloc4 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.911 [2024-11-04 14:50:52.615976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:22.911 [2024-11-04 14:50:52.616066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.911 [2024-11-04 14:50:52.616098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:22.911 [2024-11-04 14:50:52.616113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.911 [2024-11-04 14:50:52.619358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.911 [2024-11-04 14:50:52.619400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:22.911 pt4 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.911 [2024-11-04 14:50:52.624070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:22.911 [2024-11-04 
14:50:52.627202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.911 [2024-11-04 14:50:52.627461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:22.911 [2024-11-04 14:50:52.627620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:22.911 [2024-11-04 14:50:52.627960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:22.911 [2024-11-04 14:50:52.628079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:22.911 [2024-11-04 14:50:52.628507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:22.911 [2024-11-04 14:50:52.628776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:22.911 [2024-11-04 14:50:52.628797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:22.911 [2024-11-04 14:50:52.629053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.911 "name": "raid_bdev1", 00:18:22.911 "uuid": "19559c98-899b-45b8-a3a1-13b1b718a3c9", 00:18:22.911 "strip_size_kb": 64, 00:18:22.911 "state": "online", 00:18:22.911 "raid_level": "concat", 00:18:22.911 "superblock": true, 00:18:22.911 "num_base_bdevs": 4, 00:18:22.911 "num_base_bdevs_discovered": 4, 00:18:22.911 "num_base_bdevs_operational": 4, 00:18:22.911 "base_bdevs_list": [ 00:18:22.911 { 00:18:22.911 "name": "pt1", 00:18:22.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.911 "is_configured": true, 00:18:22.911 "data_offset": 2048, 00:18:22.911 "data_size": 63488 00:18:22.911 }, 00:18:22.911 { 00:18:22.911 "name": "pt2", 00:18:22.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.911 "is_configured": true, 00:18:22.911 "data_offset": 2048, 00:18:22.911 "data_size": 63488 00:18:22.911 }, 00:18:22.911 { 00:18:22.911 "name": "pt3", 00:18:22.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:22.911 "is_configured": true, 00:18:22.911 "data_offset": 2048, 00:18:22.911 
"data_size": 63488 00:18:22.911 }, 00:18:22.911 { 00:18:22.911 "name": "pt4", 00:18:22.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:22.911 "is_configured": true, 00:18:22.911 "data_offset": 2048, 00:18:22.911 "data_size": 63488 00:18:22.911 } 00:18:22.911 ] 00:18:22.911 }' 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.911 14:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.477 [2024-11-04 14:50:53.128717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.477 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:23.477 "name": "raid_bdev1", 00:18:23.477 "aliases": [ 00:18:23.477 "19559c98-899b-45b8-a3a1-13b1b718a3c9" 
00:18:23.477 ], 00:18:23.477 "product_name": "Raid Volume", 00:18:23.477 "block_size": 512, 00:18:23.477 "num_blocks": 253952, 00:18:23.477 "uuid": "19559c98-899b-45b8-a3a1-13b1b718a3c9", 00:18:23.477 "assigned_rate_limits": { 00:18:23.477 "rw_ios_per_sec": 0, 00:18:23.478 "rw_mbytes_per_sec": 0, 00:18:23.478 "r_mbytes_per_sec": 0, 00:18:23.478 "w_mbytes_per_sec": 0 00:18:23.478 }, 00:18:23.478 "claimed": false, 00:18:23.478 "zoned": false, 00:18:23.478 "supported_io_types": { 00:18:23.478 "read": true, 00:18:23.478 "write": true, 00:18:23.478 "unmap": true, 00:18:23.478 "flush": true, 00:18:23.478 "reset": true, 00:18:23.478 "nvme_admin": false, 00:18:23.478 "nvme_io": false, 00:18:23.478 "nvme_io_md": false, 00:18:23.478 "write_zeroes": true, 00:18:23.478 "zcopy": false, 00:18:23.478 "get_zone_info": false, 00:18:23.478 "zone_management": false, 00:18:23.478 "zone_append": false, 00:18:23.478 "compare": false, 00:18:23.478 "compare_and_write": false, 00:18:23.478 "abort": false, 00:18:23.478 "seek_hole": false, 00:18:23.478 "seek_data": false, 00:18:23.478 "copy": false, 00:18:23.478 "nvme_iov_md": false 00:18:23.478 }, 00:18:23.478 "memory_domains": [ 00:18:23.478 { 00:18:23.478 "dma_device_id": "system", 00:18:23.478 "dma_device_type": 1 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.478 "dma_device_type": 2 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "dma_device_id": "system", 00:18:23.478 "dma_device_type": 1 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.478 "dma_device_type": 2 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "dma_device_id": "system", 00:18:23.478 "dma_device_type": 1 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.478 "dma_device_type": 2 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "dma_device_id": "system", 00:18:23.478 "dma_device_type": 1 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:23.478 "dma_device_type": 2 00:18:23.478 } 00:18:23.478 ], 00:18:23.478 "driver_specific": { 00:18:23.478 "raid": { 00:18:23.478 "uuid": "19559c98-899b-45b8-a3a1-13b1b718a3c9", 00:18:23.478 "strip_size_kb": 64, 00:18:23.478 "state": "online", 00:18:23.478 "raid_level": "concat", 00:18:23.478 "superblock": true, 00:18:23.478 "num_base_bdevs": 4, 00:18:23.478 "num_base_bdevs_discovered": 4, 00:18:23.478 "num_base_bdevs_operational": 4, 00:18:23.478 "base_bdevs_list": [ 00:18:23.478 { 00:18:23.478 "name": "pt1", 00:18:23.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:23.478 "is_configured": true, 00:18:23.478 "data_offset": 2048, 00:18:23.478 "data_size": 63488 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "name": "pt2", 00:18:23.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.478 "is_configured": true, 00:18:23.478 "data_offset": 2048, 00:18:23.478 "data_size": 63488 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "name": "pt3", 00:18:23.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.478 "is_configured": true, 00:18:23.478 "data_offset": 2048, 00:18:23.478 "data_size": 63488 00:18:23.478 }, 00:18:23.478 { 00:18:23.478 "name": "pt4", 00:18:23.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:23.478 "is_configured": true, 00:18:23.478 "data_offset": 2048, 00:18:23.478 "data_size": 63488 00:18:23.478 } 00:18:23.478 ] 00:18:23.478 } 00:18:23.478 } 00:18:23.478 }' 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:23.478 pt2 00:18:23.478 pt3 00:18:23.478 pt4' 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.478 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.777 14:50:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.777 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 [2024-11-04 14:50:53.504863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=19559c98-899b-45b8-a3a1-13b1b718a3c9 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 19559c98-899b-45b8-a3a1-13b1b718a3c9 ']' 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 [2024-11-04 14:50:53.544447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.778 [2024-11-04 14:50:53.544821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.778 [2024-11-04 14:50:53.544993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.778 [2024-11-04 14:50:53.545097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.778 [2024-11-04 14:50:53.545123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:23.778 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.060 14:50:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.060 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.061 [2024-11-04 14:50:53.708435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:24.061 [2024-11-04 14:50:53.711517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:24.061 [2024-11-04 14:50:53.711615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:24.061 [2024-11-04 14:50:53.711683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:24.061 [2024-11-04 14:50:53.711755] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:24.061 [2024-11-04 14:50:53.711824] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:24.061 [2024-11-04 14:50:53.711856] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:24.061 [2024-11-04 14:50:53.711886] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:24.061 [2024-11-04 14:50:53.711907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.061 [2024-11-04 14:50:53.711938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:18:24.061 request: 00:18:24.061 { 00:18:24.061 "name": "raid_bdev1", 00:18:24.061 "raid_level": "concat", 00:18:24.061 "base_bdevs": [ 00:18:24.061 "malloc1", 00:18:24.061 "malloc2", 00:18:24.061 "malloc3", 00:18:24.061 "malloc4" 00:18:24.061 ], 00:18:24.061 "strip_size_kb": 64, 00:18:24.061 "superblock": false, 00:18:24.061 "method": "bdev_raid_create", 00:18:24.061 "req_id": 1 00:18:24.061 } 00:18:24.061 Got JSON-RPC error response 00:18:24.061 response: 00:18:24.061 { 00:18:24.061 "code": -17, 00:18:24.061 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:24.061 } 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.061 [2024-11-04 14:50:53.780505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.061 [2024-11-04 14:50:53.780745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.061 [2024-11-04 14:50:53.780812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:24.061 [2024-11-04 14:50:53.780920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.061 [2024-11-04 14:50:53.784326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.061 [2024-11-04 14:50:53.784519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:24.061 [2024-11-04 14:50:53.784759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:24.061 [2024-11-04 14:50:53.784953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:24.061 pt1 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.061 "name": "raid_bdev1", 00:18:24.061 "uuid": "19559c98-899b-45b8-a3a1-13b1b718a3c9", 00:18:24.061 "strip_size_kb": 64, 00:18:24.061 "state": "configuring", 00:18:24.061 "raid_level": "concat", 00:18:24.061 "superblock": true, 00:18:24.061 "num_base_bdevs": 4, 00:18:24.061 "num_base_bdevs_discovered": 1, 00:18:24.061 "num_base_bdevs_operational": 4, 00:18:24.061 "base_bdevs_list": [ 00:18:24.061 { 00:18:24.061 "name": "pt1", 00:18:24.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.061 "is_configured": true, 00:18:24.061 "data_offset": 2048, 00:18:24.061 "data_size": 63488 00:18:24.061 }, 00:18:24.061 { 00:18:24.061 "name": null, 00:18:24.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.061 "is_configured": false, 00:18:24.061 "data_offset": 2048, 00:18:24.061 "data_size": 63488 00:18:24.061 }, 00:18:24.061 { 00:18:24.061 "name": null, 00:18:24.061 
"uuid": "00000000-0000-0000-0000-000000000003", 00:18:24.061 "is_configured": false, 00:18:24.061 "data_offset": 2048, 00:18:24.061 "data_size": 63488 00:18:24.061 }, 00:18:24.061 { 00:18:24.061 "name": null, 00:18:24.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:24.061 "is_configured": false, 00:18:24.061 "data_offset": 2048, 00:18:24.061 "data_size": 63488 00:18:24.061 } 00:18:24.061 ] 00:18:24.061 }' 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.061 14:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.629 [2024-11-04 14:50:54.341039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:24.629 [2024-11-04 14:50:54.341446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.629 [2024-11-04 14:50:54.341493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:24.629 [2024-11-04 14:50:54.341516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.629 [2024-11-04 14:50:54.342305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.629 [2024-11-04 14:50:54.342360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:24.629 [2024-11-04 14:50:54.342478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:24.629 [2024-11-04 14:50:54.342526] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:24.629 pt2 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.629 [2024-11-04 14:50:54.349039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.629 14:50:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.629 "name": "raid_bdev1", 00:18:24.629 "uuid": "19559c98-899b-45b8-a3a1-13b1b718a3c9", 00:18:24.629 "strip_size_kb": 64, 00:18:24.629 "state": "configuring", 00:18:24.629 "raid_level": "concat", 00:18:24.629 "superblock": true, 00:18:24.629 "num_base_bdevs": 4, 00:18:24.629 "num_base_bdevs_discovered": 1, 00:18:24.629 "num_base_bdevs_operational": 4, 00:18:24.629 "base_bdevs_list": [ 00:18:24.629 { 00:18:24.629 "name": "pt1", 00:18:24.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.629 "is_configured": true, 00:18:24.629 "data_offset": 2048, 00:18:24.629 "data_size": 63488 00:18:24.629 }, 00:18:24.629 { 00:18:24.629 "name": null, 00:18:24.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.629 "is_configured": false, 00:18:24.629 "data_offset": 0, 00:18:24.629 "data_size": 63488 00:18:24.629 }, 00:18:24.629 { 00:18:24.629 "name": null, 00:18:24.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:24.629 "is_configured": false, 00:18:24.629 "data_offset": 2048, 00:18:24.629 "data_size": 63488 00:18:24.629 }, 00:18:24.629 { 00:18:24.629 "name": null, 00:18:24.629 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:24.629 "is_configured": false, 00:18:24.629 "data_offset": 2048, 00:18:24.629 "data_size": 63488 00:18:24.629 } 00:18:24.629 ] 00:18:24.629 }' 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.629 14:50:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:25.196 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:25.196 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:25.196 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:25.196 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.196 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.196 [2024-11-04 14:50:54.889203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.196 [2024-11-04 14:50:54.889526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.196 [2024-11-04 14:50:54.889717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:25.197 [2024-11-04 14:50:54.889746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.197 [2024-11-04 14:50:54.890452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.197 [2024-11-04 14:50:54.890490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.197 [2024-11-04 14:50:54.890614] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:25.197 [2024-11-04 14:50:54.890657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.197 pt2 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.197 [2024-11-04 14:50:54.897137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:25.197 [2024-11-04 14:50:54.897379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.197 [2024-11-04 14:50:54.897458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:25.197 [2024-11-04 14:50:54.897683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.197 [2024-11-04 14:50:54.898381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.197 [2024-11-04 14:50:54.898562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:25.197 [2024-11-04 14:50:54.898779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:25.197 [2024-11-04 14:50:54.898938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:25.197 pt3 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.197 [2024-11-04 14:50:54.905117] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:25.197 [2024-11-04 14:50:54.905311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.197 [2024-11-04 14:50:54.905400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:25.197 [2024-11-04 14:50:54.905654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.197 [2024-11-04 14:50:54.906130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.197 [2024-11-04 14:50:54.906168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:25.197 [2024-11-04 14:50:54.906300] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:25.197 [2024-11-04 14:50:54.906330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:25.197 [2024-11-04 14:50:54.906491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:25.197 [2024-11-04 14:50:54.906507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:25.197 [2024-11-04 14:50:54.906810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:25.197 [2024-11-04 14:50:54.907052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:25.197 [2024-11-04 14:50:54.907076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:25.197 [2024-11-04 14:50:54.907237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.197 pt4 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.197 "name": "raid_bdev1", 00:18:25.197 "uuid": "19559c98-899b-45b8-a3a1-13b1b718a3c9", 00:18:25.197 "strip_size_kb": 64, 00:18:25.197 "state": "online", 00:18:25.197 "raid_level": "concat", 00:18:25.197 
"superblock": true, 00:18:25.197 "num_base_bdevs": 4, 00:18:25.197 "num_base_bdevs_discovered": 4, 00:18:25.197 "num_base_bdevs_operational": 4, 00:18:25.197 "base_bdevs_list": [ 00:18:25.197 { 00:18:25.197 "name": "pt1", 00:18:25.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.197 "is_configured": true, 00:18:25.197 "data_offset": 2048, 00:18:25.197 "data_size": 63488 00:18:25.197 }, 00:18:25.197 { 00:18:25.197 "name": "pt2", 00:18:25.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.197 "is_configured": true, 00:18:25.197 "data_offset": 2048, 00:18:25.197 "data_size": 63488 00:18:25.197 }, 00:18:25.197 { 00:18:25.197 "name": "pt3", 00:18:25.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:25.197 "is_configured": true, 00:18:25.197 "data_offset": 2048, 00:18:25.197 "data_size": 63488 00:18:25.197 }, 00:18:25.197 { 00:18:25.197 "name": "pt4", 00:18:25.197 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:25.197 "is_configured": true, 00:18:25.197 "data_offset": 2048, 00:18:25.197 "data_size": 63488 00:18:25.197 } 00:18:25.197 ] 00:18:25.197 }' 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.197 14:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:25.764 14:50:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.764 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:25.765 [2024-11-04 14:50:55.425853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:25.765 "name": "raid_bdev1", 00:18:25.765 "aliases": [ 00:18:25.765 "19559c98-899b-45b8-a3a1-13b1b718a3c9" 00:18:25.765 ], 00:18:25.765 "product_name": "Raid Volume", 00:18:25.765 "block_size": 512, 00:18:25.765 "num_blocks": 253952, 00:18:25.765 "uuid": "19559c98-899b-45b8-a3a1-13b1b718a3c9", 00:18:25.765 "assigned_rate_limits": { 00:18:25.765 "rw_ios_per_sec": 0, 00:18:25.765 "rw_mbytes_per_sec": 0, 00:18:25.765 "r_mbytes_per_sec": 0, 00:18:25.765 "w_mbytes_per_sec": 0 00:18:25.765 }, 00:18:25.765 "claimed": false, 00:18:25.765 "zoned": false, 00:18:25.765 "supported_io_types": { 00:18:25.765 "read": true, 00:18:25.765 "write": true, 00:18:25.765 "unmap": true, 00:18:25.765 "flush": true, 00:18:25.765 "reset": true, 00:18:25.765 "nvme_admin": false, 00:18:25.765 "nvme_io": false, 00:18:25.765 "nvme_io_md": false, 00:18:25.765 "write_zeroes": true, 00:18:25.765 "zcopy": false, 00:18:25.765 "get_zone_info": false, 00:18:25.765 "zone_management": false, 00:18:25.765 "zone_append": false, 00:18:25.765 "compare": false, 00:18:25.765 "compare_and_write": false, 00:18:25.765 "abort": false, 00:18:25.765 "seek_hole": false, 00:18:25.765 "seek_data": false, 00:18:25.765 "copy": false, 00:18:25.765 "nvme_iov_md": false 00:18:25.765 }, 00:18:25.765 
"memory_domains": [ 00:18:25.765 { 00:18:25.765 "dma_device_id": "system", 00:18:25.765 "dma_device_type": 1 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.765 "dma_device_type": 2 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "dma_device_id": "system", 00:18:25.765 "dma_device_type": 1 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.765 "dma_device_type": 2 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "dma_device_id": "system", 00:18:25.765 "dma_device_type": 1 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.765 "dma_device_type": 2 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "dma_device_id": "system", 00:18:25.765 "dma_device_type": 1 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.765 "dma_device_type": 2 00:18:25.765 } 00:18:25.765 ], 00:18:25.765 "driver_specific": { 00:18:25.765 "raid": { 00:18:25.765 "uuid": "19559c98-899b-45b8-a3a1-13b1b718a3c9", 00:18:25.765 "strip_size_kb": 64, 00:18:25.765 "state": "online", 00:18:25.765 "raid_level": "concat", 00:18:25.765 "superblock": true, 00:18:25.765 "num_base_bdevs": 4, 00:18:25.765 "num_base_bdevs_discovered": 4, 00:18:25.765 "num_base_bdevs_operational": 4, 00:18:25.765 "base_bdevs_list": [ 00:18:25.765 { 00:18:25.765 "name": "pt1", 00:18:25.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.765 "is_configured": true, 00:18:25.765 "data_offset": 2048, 00:18:25.765 "data_size": 63488 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "name": "pt2", 00:18:25.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.765 "is_configured": true, 00:18:25.765 "data_offset": 2048, 00:18:25.765 "data_size": 63488 00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "name": "pt3", 00:18:25.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:25.765 "is_configured": true, 00:18:25.765 "data_offset": 2048, 00:18:25.765 "data_size": 63488 
00:18:25.765 }, 00:18:25.765 { 00:18:25.765 "name": "pt4", 00:18:25.765 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:25.765 "is_configured": true, 00:18:25.765 "data_offset": 2048, 00:18:25.765 "data_size": 63488 00:18:25.765 } 00:18:25.765 ] 00:18:25.765 } 00:18:25.765 } 00:18:25.765 }' 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:25.765 pt2 00:18:25.765 pt3 00:18:25.765 pt4' 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.765 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.024 [2024-11-04 14:50:55.797780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 19559c98-899b-45b8-a3a1-13b1b718a3c9 '!=' 19559c98-899b-45b8-a3a1-13b1b718a3c9 ']' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72907 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72907 ']' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72907 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@957 -- # uname 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72907 00:18:26.024 killing process with pid 72907 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72907' 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72907 00:18:26.024 [2024-11-04 14:50:55.878458] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.024 14:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72907 00:18:26.024 [2024-11-04 14:50:55.878602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.024 [2024-11-04 14:50:55.878739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.024 [2024-11-04 14:50:55.878770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:26.591 [2024-11-04 14:50:56.275159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.967 ************************************ 00:18:27.967 END TEST raid_superblock_test 00:18:27.967 ************************************ 00:18:27.967 14:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:27.967 00:18:27.967 real 0m6.186s 00:18:27.967 user 0m9.099s 00:18:27.967 sys 0m1.005s 00:18:27.967 14:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:27.967 14:50:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.967 14:50:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:18:27.967 14:50:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:27.967 14:50:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:27.967 14:50:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.967 ************************************ 00:18:27.967 START TEST raid_read_error_test 00:18:27.967 ************************************ 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZC1uudubCk 00:18:27.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73178 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73178 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73178 ']' 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:27.967 14:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.967 [2024-11-04 14:50:57.662167] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:18:27.967 [2024-11-04 14:50:57.662478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73178 ] 00:18:28.225 [2024-11-04 14:50:57.867772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.225 [2024-11-04 14:50:58.016534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.483 [2024-11-04 14:50:58.247564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.484 [2024-11-04 14:50:58.247640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.741 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.741 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:28.741 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:28.741 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:28.741 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.741 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.999 BaseBdev1_malloc 00:18:28.999 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.999 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:28.999 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.999 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.999 true 00:18:28.999 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:28.999 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:28.999 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.999 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.999 [2024-11-04 14:50:58.653841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:29.000 [2024-11-04 14:50:58.653966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.000 [2024-11-04 14:50:58.654000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:29.000 [2024-11-04 14:50:58.654020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.000 [2024-11-04 14:50:58.657536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.000 [2024-11-04 14:50:58.657625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:29.000 BaseBdev1 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 BaseBdev2_malloc 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 true 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 [2024-11-04 14:50:58.719211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:29.000 [2024-11-04 14:50:58.719528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.000 [2024-11-04 14:50:58.719568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:29.000 [2024-11-04 14:50:58.719588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.000 [2024-11-04 14:50:58.723186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.000 [2024-11-04 14:50:58.723425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:29.000 BaseBdev2 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 BaseBdev3_malloc 00:18:29.000 14:50:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 true 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 [2024-11-04 14:50:58.799664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:29.000 [2024-11-04 14:50:58.799925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.000 [2024-11-04 14:50:58.800000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:29.000 [2024-11-04 14:50:58.800136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.000 [2024-11-04 14:50:58.803423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.000 [2024-11-04 14:50:58.803489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:29.000 BaseBdev3 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 BaseBdev4_malloc 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 true 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 [2024-11-04 14:50:58.864640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:29.000 [2024-11-04 14:50:58.864729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.000 [2024-11-04 14:50:58.864756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:29.000 [2024-11-04 14:50:58.864773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.000 [2024-11-04 14:50:58.868111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.000 [2024-11-04 14:50:58.868188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:29.000 BaseBdev4 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.000 [2024-11-04 14:50:58.872861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.000 [2024-11-04 14:50:58.875849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.000 [2024-11-04 14:50:58.876134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.000 [2024-11-04 14:50:58.876333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:29.000 [2024-11-04 14:50:58.876682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:29.000 [2024-11-04 14:50:58.876705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:29.000 [2024-11-04 14:50:58.877044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:29.000 [2024-11-04 14:50:58.877307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:29.000 [2024-11-04 14:50:58.877348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:29.000 [2024-11-04 14:50:58.877622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:29.000 14:50:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.000 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.258 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.258 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.258 "name": "raid_bdev1", 00:18:29.258 "uuid": "d3c32d42-c59c-4e84-8ed2-64ce5e0cfa14", 00:18:29.258 "strip_size_kb": 64, 00:18:29.258 "state": "online", 00:18:29.258 "raid_level": "concat", 00:18:29.258 "superblock": true, 00:18:29.258 "num_base_bdevs": 4, 00:18:29.258 "num_base_bdevs_discovered": 4, 00:18:29.258 "num_base_bdevs_operational": 4, 00:18:29.258 "base_bdevs_list": [ 
00:18:29.258 { 00:18:29.258 "name": "BaseBdev1", 00:18:29.258 "uuid": "5e0b4f97-cd0b-55d2-9677-ca6ab0ab0bd0", 00:18:29.258 "is_configured": true, 00:18:29.258 "data_offset": 2048, 00:18:29.258 "data_size": 63488 00:18:29.258 }, 00:18:29.258 { 00:18:29.258 "name": "BaseBdev2", 00:18:29.258 "uuid": "a29d27b7-461b-5797-b1d8-d70b7d8e96cc", 00:18:29.258 "is_configured": true, 00:18:29.258 "data_offset": 2048, 00:18:29.258 "data_size": 63488 00:18:29.258 }, 00:18:29.258 { 00:18:29.258 "name": "BaseBdev3", 00:18:29.258 "uuid": "576597c4-3960-5e94-a815-73abc0e4e40b", 00:18:29.258 "is_configured": true, 00:18:29.258 "data_offset": 2048, 00:18:29.258 "data_size": 63488 00:18:29.258 }, 00:18:29.258 { 00:18:29.258 "name": "BaseBdev4", 00:18:29.258 "uuid": "f83a595f-1ffb-5198-9297-7eb0acf2880b", 00:18:29.258 "is_configured": true, 00:18:29.258 "data_offset": 2048, 00:18:29.258 "data_size": 63488 00:18:29.258 } 00:18:29.258 ] 00:18:29.258 }' 00:18:29.258 14:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.258 14:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.516 14:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:29.516 14:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:29.775 [2024-11-04 14:50:59.547584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.709 14:51:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.709 14:51:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.709 "name": "raid_bdev1", 00:18:30.709 "uuid": "d3c32d42-c59c-4e84-8ed2-64ce5e0cfa14", 00:18:30.709 "strip_size_kb": 64, 00:18:30.709 "state": "online", 00:18:30.709 "raid_level": "concat", 00:18:30.709 "superblock": true, 00:18:30.709 "num_base_bdevs": 4, 00:18:30.709 "num_base_bdevs_discovered": 4, 00:18:30.709 "num_base_bdevs_operational": 4, 00:18:30.709 "base_bdevs_list": [ 00:18:30.709 { 00:18:30.709 "name": "BaseBdev1", 00:18:30.709 "uuid": "5e0b4f97-cd0b-55d2-9677-ca6ab0ab0bd0", 00:18:30.709 "is_configured": true, 00:18:30.709 "data_offset": 2048, 00:18:30.709 "data_size": 63488 00:18:30.709 }, 00:18:30.709 { 00:18:30.709 "name": "BaseBdev2", 00:18:30.709 "uuid": "a29d27b7-461b-5797-b1d8-d70b7d8e96cc", 00:18:30.709 "is_configured": true, 00:18:30.709 "data_offset": 2048, 00:18:30.709 "data_size": 63488 00:18:30.709 }, 00:18:30.709 { 00:18:30.709 "name": "BaseBdev3", 00:18:30.709 "uuid": "576597c4-3960-5e94-a815-73abc0e4e40b", 00:18:30.709 "is_configured": true, 00:18:30.709 "data_offset": 2048, 00:18:30.709 "data_size": 63488 00:18:30.709 }, 00:18:30.709 { 00:18:30.709 "name": "BaseBdev4", 00:18:30.709 "uuid": "f83a595f-1ffb-5198-9297-7eb0acf2880b", 00:18:30.709 "is_configured": true, 00:18:30.709 "data_offset": 2048, 00:18:30.709 "data_size": 63488 00:18:30.709 } 00:18:30.709 ] 00:18:30.709 }' 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.709 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.275 [2024-11-04 14:51:00.958422] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.275 [2024-11-04 14:51:00.958469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.275 [2024-11-04 14:51:00.962353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.275 [2024-11-04 14:51:00.962655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.275 [2024-11-04 14:51:00.962887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.275 [2024-11-04 14:51:00.963094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:31.275 { 00:18:31.275 "results": [ 00:18:31.275 { 00:18:31.275 "job": "raid_bdev1", 00:18:31.275 "core_mask": "0x1", 00:18:31.275 "workload": "randrw", 00:18:31.275 "percentage": 50, 00:18:31.275 "status": "finished", 00:18:31.275 "queue_depth": 1, 00:18:31.275 "io_size": 131072, 00:18:31.275 "runtime": 1.407633, 00:18:31.275 "iops": 9067.704437165085, 00:18:31.275 "mibps": 1133.4630546456356, 00:18:31.275 "io_failed": 1, 00:18:31.275 "io_timeout": 0, 00:18:31.275 "avg_latency_us": 155.79375308905745, 00:18:31.275 "min_latency_us": 37.00363636363636, 00:18:31.275 "max_latency_us": 1854.370909090909 00:18:31.275 } 00:18:31.275 ], 00:18:31.275 "core_count": 1 00:18:31.275 } 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73178 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73178 ']' 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73178 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:31.275 14:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73178 00:18:31.275 14:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:31.275 14:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:31.275 14:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73178' 00:18:31.275 killing process with pid 73178 00:18:31.275 14:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73178 00:18:31.275 14:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73178 00:18:31.275 [2024-11-04 14:51:01.007534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.533 [2024-11-04 14:51:01.333768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZC1uudubCk 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:32.908 ************************************ 00:18:32.908 END TEST raid_read_error_test 00:18:32.908 ************************************ 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:18:32.908 00:18:32.908 real 0m5.053s 
00:18:32.908 user 0m6.049s 00:18:32.908 sys 0m0.762s 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:32.908 14:51:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.908 14:51:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:18:32.908 14:51:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:32.908 14:51:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:32.908 14:51:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.908 ************************************ 00:18:32.908 START TEST raid_write_error_test 00:18:32.908 ************************************ 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.naNlYxHW0r 00:18:32.908 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73324 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73324 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73324 ']' 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:32.908 14:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.908 [2024-11-04 14:51:02.734007] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:18:32.908 [2024-11-04 14:51:02.735256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73324 ] 00:18:33.166 [2024-11-04 14:51:02.934597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.424 [2024-11-04 14:51:03.113740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.682 [2024-11-04 14:51:03.346454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.682 [2024-11-04 14:51:03.346580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.940 BaseBdev1_malloc 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.940 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 true 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 [2024-11-04 14:51:03.844034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:34.199 [2024-11-04 14:51:03.844300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.199 [2024-11-04 14:51:03.844357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:34.199 [2024-11-04 14:51:03.844387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.199 [2024-11-04 14:51:03.847810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.199 [2024-11-04 14:51:03.847878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.199 BaseBdev1 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 BaseBdev2_malloc 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:34.199 14:51:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 true 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 [2024-11-04 14:51:03.914815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:34.199 [2024-11-04 14:51:03.914929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.199 [2024-11-04 14:51:03.914961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:34.199 [2024-11-04 14:51:03.914979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.199 [2024-11-04 14:51:03.918282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.199 [2024-11-04 14:51:03.918368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.199 BaseBdev2 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:34.199 BaseBdev3_malloc 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 true 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.199 14:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 [2024-11-04 14:51:03.996839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:34.199 [2024-11-04 14:51:03.997096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.199 [2024-11-04 14:51:03.997141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:34.199 [2024-11-04 14:51:03.997162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.199 [2024-11-04 14:51:04.000490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.199 [2024-11-04 14:51:04.000557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:34.199 BaseBdev3 00:18:34.199 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.199 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:34.199 14:51:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:34.199 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.199 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.199 BaseBdev4_malloc 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.200 true 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.200 [2024-11-04 14:51:04.063750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:34.200 [2024-11-04 14:51:04.064034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.200 [2024-11-04 14:51:04.064189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:34.200 [2024-11-04 14:51:04.064333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.200 [2024-11-04 14:51:04.067904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.200 [2024-11-04 14:51:04.068126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:34.200 BaseBdev4 
00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.200 [2024-11-04 14:51:04.076627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.200 [2024-11-04 14:51:04.079569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.200 [2024-11-04 14:51:04.079691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:34.200 [2024-11-04 14:51:04.079789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:34.200 [2024-11-04 14:51:04.080161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:34.200 [2024-11-04 14:51:04.080184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:34.200 [2024-11-04 14:51:04.080680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:34.200 [2024-11-04 14:51:04.080949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:34.200 [2024-11-04 14:51:04.080969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:34.200 [2024-11-04 14:51:04.081339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.200 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.458 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.458 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.458 "name": "raid_bdev1", 00:18:34.458 "uuid": "b61e8ba0-b591-441c-acbe-0a93ad92503f", 00:18:34.458 "strip_size_kb": 64, 00:18:34.458 "state": "online", 00:18:34.458 "raid_level": "concat", 00:18:34.458 "superblock": true, 00:18:34.458 "num_base_bdevs": 4, 00:18:34.458 "num_base_bdevs_discovered": 4, 00:18:34.458 
"num_base_bdevs_operational": 4, 00:18:34.458 "base_bdevs_list": [ 00:18:34.458 { 00:18:34.458 "name": "BaseBdev1", 00:18:34.458 "uuid": "7ef85c31-9e2f-5834-85e8-1ea858050178", 00:18:34.458 "is_configured": true, 00:18:34.458 "data_offset": 2048, 00:18:34.458 "data_size": 63488 00:18:34.458 }, 00:18:34.458 { 00:18:34.458 "name": "BaseBdev2", 00:18:34.458 "uuid": "740d8000-384e-5f2a-9a7b-e3c08f7225da", 00:18:34.458 "is_configured": true, 00:18:34.458 "data_offset": 2048, 00:18:34.458 "data_size": 63488 00:18:34.458 }, 00:18:34.458 { 00:18:34.458 "name": "BaseBdev3", 00:18:34.458 "uuid": "1daa8d94-1510-5af6-a155-58069759cc65", 00:18:34.458 "is_configured": true, 00:18:34.458 "data_offset": 2048, 00:18:34.458 "data_size": 63488 00:18:34.458 }, 00:18:34.458 { 00:18:34.458 "name": "BaseBdev4", 00:18:34.458 "uuid": "004d79d2-00ab-5ed3-9057-7b0d6afcad35", 00:18:34.458 "is_configured": true, 00:18:34.458 "data_offset": 2048, 00:18:34.458 "data_size": 63488 00:18:34.458 } 00:18:34.458 ] 00:18:34.458 }' 00:18:34.458 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.458 14:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.025 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:35.025 14:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:35.025 [2024-11-04 14:51:04.747056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:35.960 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:35.960 14:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.960 14:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.960 14:51:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.961 14:51:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.961 "name": "raid_bdev1", 00:18:35.961 "uuid": "b61e8ba0-b591-441c-acbe-0a93ad92503f", 00:18:35.961 "strip_size_kb": 64, 00:18:35.961 "state": "online", 00:18:35.961 "raid_level": "concat", 00:18:35.961 "superblock": true, 00:18:35.961 "num_base_bdevs": 4, 00:18:35.961 "num_base_bdevs_discovered": 4, 00:18:35.961 "num_base_bdevs_operational": 4, 00:18:35.961 "base_bdevs_list": [ 00:18:35.961 { 00:18:35.961 "name": "BaseBdev1", 00:18:35.961 "uuid": "7ef85c31-9e2f-5834-85e8-1ea858050178", 00:18:35.961 "is_configured": true, 00:18:35.961 "data_offset": 2048, 00:18:35.961 "data_size": 63488 00:18:35.961 }, 00:18:35.961 { 00:18:35.961 "name": "BaseBdev2", 00:18:35.961 "uuid": "740d8000-384e-5f2a-9a7b-e3c08f7225da", 00:18:35.961 "is_configured": true, 00:18:35.961 "data_offset": 2048, 00:18:35.961 "data_size": 63488 00:18:35.961 }, 00:18:35.961 { 00:18:35.961 "name": "BaseBdev3", 00:18:35.961 "uuid": "1daa8d94-1510-5af6-a155-58069759cc65", 00:18:35.961 "is_configured": true, 00:18:35.961 "data_offset": 2048, 00:18:35.961 "data_size": 63488 00:18:35.961 }, 00:18:35.961 { 00:18:35.961 "name": "BaseBdev4", 00:18:35.961 "uuid": "004d79d2-00ab-5ed3-9057-7b0d6afcad35", 00:18:35.961 "is_configured": true, 00:18:35.961 "data_offset": 2048, 00:18:35.961 "data_size": 63488 00:18:35.961 } 00:18:35.961 ] 00:18:35.961 }' 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.961 14:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.528 [2024-11-04 14:51:06.260433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.528 [2024-11-04 14:51:06.260477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.528 [2024-11-04 14:51:06.264201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.528 [2024-11-04 14:51:06.264433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.528 [2024-11-04 14:51:06.264549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.528 [2024-11-04 14:51:06.264786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.528 { 00:18:36.528 "results": [ 00:18:36.528 { 00:18:36.528 "job": "raid_bdev1", 00:18:36.528 "core_mask": "0x1", 00:18:36.528 "workload": "randrw", 00:18:36.528 "percentage": 50, 00:18:36.528 "status": "finished", 00:18:36.528 "queue_depth": 1, 00:18:36.528 "io_size": 131072, 00:18:36.528 "runtime": 1.509778, 00:18:36.528 "iops": 9007.94686371109, 00:18:36.528 "mibps": 1125.9933579638862, 00:18:36.528 "io_failed": 1, 00:18:36.528 "io_timeout": 0, 00:18:36.528 "avg_latency_us": 156.8641877936783, 00:18:36.528 "min_latency_us": 38.86545454545455, 00:18:36.528 "max_latency_us": 1921.3963636363637 00:18:36.528 } 00:18:36.528 ], 00:18:36.528 "core_count": 1 00:18:36.528 } 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73324 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73324 ']' 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73324 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73324 00:18:36.528 killing process with pid 73324 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73324' 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73324 00:18:36.528 [2024-11-04 14:51:06.299104] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.528 14:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73324 00:18:36.787 [2024-11-04 14:51:06.615998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.naNlYxHW0r 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.66 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:38.176 ************************************ 00:18:38.176 END TEST raid_write_error_test 00:18:38.176 ************************************ 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:38.176 14:51:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.66 != \0\.\0\0 ]] 00:18:38.176 00:18:38.176 real 0m5.228s 00:18:38.176 user 0m6.459s 00:18:38.176 sys 0m0.687s 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:38.176 14:51:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.176 14:51:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:38.176 14:51:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:38.176 14:51:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:38.176 14:51:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:38.176 14:51:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.176 ************************************ 00:18:38.176 START TEST raid_state_function_test 00:18:38.176 ************************************ 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:38.176 14:51:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73477 00:18:38.176 Process raid pid: 73477 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73477' 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73477 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73477 ']' 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:38.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:38.176 14:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.176 [2024-11-04 14:51:08.007767] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:18:38.176 [2024-11-04 14:51:08.007987] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.434 [2024-11-04 14:51:08.201243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.693 [2024-11-04 14:51:08.346182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.950 [2024-11-04 14:51:08.585169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.950 [2024-11-04 14:51:08.585262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.209 [2024-11-04 14:51:08.962389] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.209 [2024-11-04 14:51:08.962495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.209 [2024-11-04 14:51:08.962514] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.209 [2024-11-04 14:51:08.962532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.209 [2024-11-04 14:51:08.962543] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:39.209 [2024-11-04 14:51:08.962559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:39.209 [2024-11-04 14:51:08.962569] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:39.209 [2024-11-04 14:51:08.962583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.209 14:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.209 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.209 "name": "Existed_Raid", 00:18:39.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.209 "strip_size_kb": 0, 00:18:39.209 "state": "configuring", 00:18:39.209 "raid_level": "raid1", 00:18:39.209 "superblock": false, 00:18:39.209 "num_base_bdevs": 4, 00:18:39.209 "num_base_bdevs_discovered": 0, 00:18:39.209 "num_base_bdevs_operational": 4, 00:18:39.209 "base_bdevs_list": [ 00:18:39.209 { 00:18:39.209 "name": "BaseBdev1", 00:18:39.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.209 "is_configured": false, 00:18:39.209 "data_offset": 0, 00:18:39.209 "data_size": 0 00:18:39.209 }, 00:18:39.209 { 00:18:39.209 "name": "BaseBdev2", 00:18:39.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.209 "is_configured": false, 00:18:39.209 "data_offset": 0, 00:18:39.209 "data_size": 0 00:18:39.209 }, 00:18:39.209 { 00:18:39.209 "name": "BaseBdev3", 00:18:39.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.209 "is_configured": false, 00:18:39.209 "data_offset": 0, 00:18:39.209 "data_size": 0 00:18:39.209 }, 00:18:39.209 { 00:18:39.209 "name": "BaseBdev4", 00:18:39.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.209 "is_configured": false, 00:18:39.209 "data_offset": 0, 00:18:39.209 "data_size": 0 00:18:39.209 } 00:18:39.209 ] 00:18:39.210 }' 00:18:39.210 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.210 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.777 [2024-11-04 14:51:09.502630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:39.777 [2024-11-04 14:51:09.502716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.777 [2024-11-04 14:51:09.510515] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.777 [2024-11-04 14:51:09.510588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.777 [2024-11-04 14:51:09.510634] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.777 [2024-11-04 14:51:09.510650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.777 [2024-11-04 14:51:09.510660] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:39.777 [2024-11-04 14:51:09.510674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:39.777 [2024-11-04 14:51:09.510683] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:39.777 [2024-11-04 14:51:09.510698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.777 [2024-11-04 14:51:09.562765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.777 BaseBdev1 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.777 [ 00:18:39.777 { 00:18:39.777 "name": "BaseBdev1", 00:18:39.777 "aliases": [ 00:18:39.777 "86045e6a-2773-4f3b-8351-8f6bc96e0a1d" 00:18:39.777 ], 00:18:39.777 "product_name": "Malloc disk", 00:18:39.777 "block_size": 512, 00:18:39.777 "num_blocks": 65536, 00:18:39.777 "uuid": "86045e6a-2773-4f3b-8351-8f6bc96e0a1d", 00:18:39.777 "assigned_rate_limits": { 00:18:39.777 "rw_ios_per_sec": 0, 00:18:39.777 "rw_mbytes_per_sec": 0, 00:18:39.777 "r_mbytes_per_sec": 0, 00:18:39.777 "w_mbytes_per_sec": 0 00:18:39.777 }, 00:18:39.777 "claimed": true, 00:18:39.777 "claim_type": "exclusive_write", 00:18:39.777 "zoned": false, 00:18:39.777 "supported_io_types": { 00:18:39.777 "read": true, 00:18:39.777 "write": true, 00:18:39.777 "unmap": true, 00:18:39.777 "flush": true, 00:18:39.777 "reset": true, 00:18:39.777 "nvme_admin": false, 00:18:39.777 "nvme_io": false, 00:18:39.777 "nvme_io_md": false, 00:18:39.777 "write_zeroes": true, 00:18:39.777 "zcopy": true, 00:18:39.777 "get_zone_info": false, 00:18:39.777 "zone_management": false, 00:18:39.777 "zone_append": false, 00:18:39.777 "compare": false, 00:18:39.777 "compare_and_write": false, 00:18:39.777 "abort": true, 00:18:39.777 "seek_hole": false, 00:18:39.777 "seek_data": false, 00:18:39.777 "copy": true, 00:18:39.777 "nvme_iov_md": false 00:18:39.777 }, 00:18:39.777 "memory_domains": [ 00:18:39.777 { 00:18:39.777 "dma_device_id": "system", 00:18:39.777 "dma_device_type": 1 00:18:39.777 }, 00:18:39.777 { 00:18:39.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.777 "dma_device_type": 2 00:18:39.777 } 00:18:39.777 ], 00:18:39.777 "driver_specific": {} 00:18:39.777 } 00:18:39.777 ] 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.777 "name": "Existed_Raid", 00:18:39.777 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:39.777 "strip_size_kb": 0, 00:18:39.777 "state": "configuring", 00:18:39.777 "raid_level": "raid1", 00:18:39.777 "superblock": false, 00:18:39.777 "num_base_bdevs": 4, 00:18:39.777 "num_base_bdevs_discovered": 1, 00:18:39.777 "num_base_bdevs_operational": 4, 00:18:39.777 "base_bdevs_list": [ 00:18:39.777 { 00:18:39.777 "name": "BaseBdev1", 00:18:39.777 "uuid": "86045e6a-2773-4f3b-8351-8f6bc96e0a1d", 00:18:39.777 "is_configured": true, 00:18:39.777 "data_offset": 0, 00:18:39.777 "data_size": 65536 00:18:39.777 }, 00:18:39.777 { 00:18:39.777 "name": "BaseBdev2", 00:18:39.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.777 "is_configured": false, 00:18:39.777 "data_offset": 0, 00:18:39.777 "data_size": 0 00:18:39.777 }, 00:18:39.777 { 00:18:39.777 "name": "BaseBdev3", 00:18:39.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.777 "is_configured": false, 00:18:39.777 "data_offset": 0, 00:18:39.777 "data_size": 0 00:18:39.777 }, 00:18:39.777 { 00:18:39.777 "name": "BaseBdev4", 00:18:39.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.777 "is_configured": false, 00:18:39.777 "data_offset": 0, 00:18:39.777 "data_size": 0 00:18:39.777 } 00:18:39.777 ] 00:18:39.777 }' 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.777 14:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.343 [2024-11-04 14:51:10.074977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.343 [2024-11-04 14:51:10.075069] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.343 [2024-11-04 14:51:10.083002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.343 [2024-11-04 14:51:10.085816] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.343 [2024-11-04 14:51:10.085874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.343 [2024-11-04 14:51:10.085892] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.343 [2024-11-04 14:51:10.085909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.343 [2024-11-04 14:51:10.085920] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:40.343 [2024-11-04 14:51:10.085934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:40.343 14:51:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.343 "name": "Existed_Raid", 00:18:40.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.343 "strip_size_kb": 0, 00:18:40.343 "state": "configuring", 00:18:40.343 "raid_level": "raid1", 00:18:40.343 "superblock": false, 00:18:40.343 "num_base_bdevs": 4, 00:18:40.343 "num_base_bdevs_discovered": 1, 00:18:40.343 
"num_base_bdevs_operational": 4, 00:18:40.343 "base_bdevs_list": [ 00:18:40.343 { 00:18:40.343 "name": "BaseBdev1", 00:18:40.343 "uuid": "86045e6a-2773-4f3b-8351-8f6bc96e0a1d", 00:18:40.343 "is_configured": true, 00:18:40.343 "data_offset": 0, 00:18:40.343 "data_size": 65536 00:18:40.343 }, 00:18:40.343 { 00:18:40.343 "name": "BaseBdev2", 00:18:40.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.343 "is_configured": false, 00:18:40.343 "data_offset": 0, 00:18:40.343 "data_size": 0 00:18:40.343 }, 00:18:40.343 { 00:18:40.343 "name": "BaseBdev3", 00:18:40.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.343 "is_configured": false, 00:18:40.343 "data_offset": 0, 00:18:40.343 "data_size": 0 00:18:40.343 }, 00:18:40.343 { 00:18:40.343 "name": "BaseBdev4", 00:18:40.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.343 "is_configured": false, 00:18:40.343 "data_offset": 0, 00:18:40.343 "data_size": 0 00:18:40.343 } 00:18:40.343 ] 00:18:40.343 }' 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.343 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.910 [2024-11-04 14:51:10.610413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:40.910 BaseBdev2 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev2 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.910 [ 00:18:40.910 { 00:18:40.910 "name": "BaseBdev2", 00:18:40.910 "aliases": [ 00:18:40.910 "807e0b8f-e566-4782-bf91-06a0e7eb8ff5" 00:18:40.910 ], 00:18:40.910 "product_name": "Malloc disk", 00:18:40.910 "block_size": 512, 00:18:40.910 "num_blocks": 65536, 00:18:40.910 "uuid": "807e0b8f-e566-4782-bf91-06a0e7eb8ff5", 00:18:40.910 "assigned_rate_limits": { 00:18:40.910 "rw_ios_per_sec": 0, 00:18:40.910 "rw_mbytes_per_sec": 0, 00:18:40.910 "r_mbytes_per_sec": 0, 00:18:40.910 "w_mbytes_per_sec": 0 00:18:40.910 }, 00:18:40.910 "claimed": true, 00:18:40.910 "claim_type": "exclusive_write", 00:18:40.910 "zoned": false, 00:18:40.910 "supported_io_types": { 00:18:40.910 "read": true, 00:18:40.910 "write": true, 00:18:40.910 
"unmap": true, 00:18:40.910 "flush": true, 00:18:40.910 "reset": true, 00:18:40.910 "nvme_admin": false, 00:18:40.910 "nvme_io": false, 00:18:40.910 "nvme_io_md": false, 00:18:40.910 "write_zeroes": true, 00:18:40.910 "zcopy": true, 00:18:40.910 "get_zone_info": false, 00:18:40.910 "zone_management": false, 00:18:40.910 "zone_append": false, 00:18:40.910 "compare": false, 00:18:40.910 "compare_and_write": false, 00:18:40.910 "abort": true, 00:18:40.910 "seek_hole": false, 00:18:40.910 "seek_data": false, 00:18:40.910 "copy": true, 00:18:40.910 "nvme_iov_md": false 00:18:40.910 }, 00:18:40.910 "memory_domains": [ 00:18:40.910 { 00:18:40.910 "dma_device_id": "system", 00:18:40.910 "dma_device_type": 1 00:18:40.910 }, 00:18:40.910 { 00:18:40.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.910 "dma_device_type": 2 00:18:40.910 } 00:18:40.910 ], 00:18:40.910 "driver_specific": {} 00:18:40.910 } 00:18:40.910 ] 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.910 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.911 14:51:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.911 "name": "Existed_Raid", 00:18:40.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.911 "strip_size_kb": 0, 00:18:40.911 "state": "configuring", 00:18:40.911 "raid_level": "raid1", 00:18:40.911 "superblock": false, 00:18:40.911 "num_base_bdevs": 4, 00:18:40.911 "num_base_bdevs_discovered": 2, 00:18:40.911 "num_base_bdevs_operational": 4, 00:18:40.911 "base_bdevs_list": [ 00:18:40.911 { 00:18:40.911 "name": "BaseBdev1", 00:18:40.911 "uuid": "86045e6a-2773-4f3b-8351-8f6bc96e0a1d", 00:18:40.911 "is_configured": true, 00:18:40.911 "data_offset": 0, 00:18:40.911 "data_size": 65536 00:18:40.911 }, 00:18:40.911 { 00:18:40.911 "name": "BaseBdev2", 00:18:40.911 "uuid": "807e0b8f-e566-4782-bf91-06a0e7eb8ff5", 00:18:40.911 "is_configured": true, 00:18:40.911 
"data_offset": 0, 00:18:40.911 "data_size": 65536 00:18:40.911 }, 00:18:40.911 { 00:18:40.911 "name": "BaseBdev3", 00:18:40.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.911 "is_configured": false, 00:18:40.911 "data_offset": 0, 00:18:40.911 "data_size": 0 00:18:40.911 }, 00:18:40.911 { 00:18:40.911 "name": "BaseBdev4", 00:18:40.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.911 "is_configured": false, 00:18:40.911 "data_offset": 0, 00:18:40.911 "data_size": 0 00:18:40.911 } 00:18:40.911 ] 00:18:40.911 }' 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.911 14:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.478 [2024-11-04 14:51:11.208686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:41.478 BaseBdev3 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.478 [ 00:18:41.478 { 00:18:41.478 "name": "BaseBdev3", 00:18:41.478 "aliases": [ 00:18:41.478 "c2533967-50f7-4e57-968d-328d6c2c3f81" 00:18:41.478 ], 00:18:41.478 "product_name": "Malloc disk", 00:18:41.478 "block_size": 512, 00:18:41.478 "num_blocks": 65536, 00:18:41.478 "uuid": "c2533967-50f7-4e57-968d-328d6c2c3f81", 00:18:41.478 "assigned_rate_limits": { 00:18:41.478 "rw_ios_per_sec": 0, 00:18:41.478 "rw_mbytes_per_sec": 0, 00:18:41.478 "r_mbytes_per_sec": 0, 00:18:41.478 "w_mbytes_per_sec": 0 00:18:41.478 }, 00:18:41.478 "claimed": true, 00:18:41.478 "claim_type": "exclusive_write", 00:18:41.478 "zoned": false, 00:18:41.478 "supported_io_types": { 00:18:41.478 "read": true, 00:18:41.478 "write": true, 00:18:41.478 "unmap": true, 00:18:41.478 "flush": true, 00:18:41.478 "reset": true, 00:18:41.478 "nvme_admin": false, 00:18:41.478 "nvme_io": false, 00:18:41.478 "nvme_io_md": false, 00:18:41.478 "write_zeroes": true, 00:18:41.478 "zcopy": true, 00:18:41.478 "get_zone_info": false, 00:18:41.478 "zone_management": false, 00:18:41.478 "zone_append": false, 00:18:41.478 "compare": false, 00:18:41.478 "compare_and_write": false, 00:18:41.478 "abort": true, 
00:18:41.478 "seek_hole": false, 00:18:41.478 "seek_data": false, 00:18:41.478 "copy": true, 00:18:41.478 "nvme_iov_md": false 00:18:41.478 }, 00:18:41.478 "memory_domains": [ 00:18:41.478 { 00:18:41.478 "dma_device_id": "system", 00:18:41.478 "dma_device_type": 1 00:18:41.478 }, 00:18:41.478 { 00:18:41.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.478 "dma_device_type": 2 00:18:41.478 } 00:18:41.478 ], 00:18:41.478 "driver_specific": {} 00:18:41.478 } 00:18:41.478 ] 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.478 14:51:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.478 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.478 "name": "Existed_Raid", 00:18:41.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.478 "strip_size_kb": 0, 00:18:41.478 "state": "configuring", 00:18:41.478 "raid_level": "raid1", 00:18:41.478 "superblock": false, 00:18:41.478 "num_base_bdevs": 4, 00:18:41.478 "num_base_bdevs_discovered": 3, 00:18:41.478 "num_base_bdevs_operational": 4, 00:18:41.478 "base_bdevs_list": [ 00:18:41.478 { 00:18:41.478 "name": "BaseBdev1", 00:18:41.478 "uuid": "86045e6a-2773-4f3b-8351-8f6bc96e0a1d", 00:18:41.478 "is_configured": true, 00:18:41.478 "data_offset": 0, 00:18:41.478 "data_size": 65536 00:18:41.478 }, 00:18:41.478 { 00:18:41.478 "name": "BaseBdev2", 00:18:41.478 "uuid": "807e0b8f-e566-4782-bf91-06a0e7eb8ff5", 00:18:41.478 "is_configured": true, 00:18:41.478 "data_offset": 0, 00:18:41.478 "data_size": 65536 00:18:41.478 }, 00:18:41.478 { 00:18:41.479 "name": "BaseBdev3", 00:18:41.479 "uuid": "c2533967-50f7-4e57-968d-328d6c2c3f81", 00:18:41.479 "is_configured": true, 00:18:41.479 "data_offset": 0, 00:18:41.479 "data_size": 65536 00:18:41.479 }, 00:18:41.479 { 00:18:41.479 "name": "BaseBdev4", 00:18:41.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.479 "is_configured": false, 00:18:41.479 "data_offset": 
0, 00:18:41.479 "data_size": 0 00:18:41.479 } 00:18:41.479 ] 00:18:41.479 }' 00:18:41.479 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.479 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.080 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:42.080 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.080 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.080 [2024-11-04 14:51:11.786656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:42.080 [2024-11-04 14:51:11.786723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:42.080 [2024-11-04 14:51:11.786736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:42.080 [2024-11-04 14:51:11.787156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:42.081 [2024-11-04 14:51:11.787432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:42.081 [2024-11-04 14:51:11.787473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:42.081 BaseBdev4 00:18:42.081 [2024-11-04 14:51:11.787830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local 
bdev_timeout= 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.081 [ 00:18:42.081 { 00:18:42.081 "name": "BaseBdev4", 00:18:42.081 "aliases": [ 00:18:42.081 "925f8b86-1a0e-4a42-8c81-5407e382dcb0" 00:18:42.081 ], 00:18:42.081 "product_name": "Malloc disk", 00:18:42.081 "block_size": 512, 00:18:42.081 "num_blocks": 65536, 00:18:42.081 "uuid": "925f8b86-1a0e-4a42-8c81-5407e382dcb0", 00:18:42.081 "assigned_rate_limits": { 00:18:42.081 "rw_ios_per_sec": 0, 00:18:42.081 "rw_mbytes_per_sec": 0, 00:18:42.081 "r_mbytes_per_sec": 0, 00:18:42.081 "w_mbytes_per_sec": 0 00:18:42.081 }, 00:18:42.081 "claimed": true, 00:18:42.081 "claim_type": "exclusive_write", 00:18:42.081 "zoned": false, 00:18:42.081 "supported_io_types": { 00:18:42.081 "read": true, 00:18:42.081 "write": true, 00:18:42.081 "unmap": true, 00:18:42.081 "flush": true, 00:18:42.081 "reset": true, 00:18:42.081 "nvme_admin": false, 00:18:42.081 "nvme_io": 
false, 00:18:42.081 "nvme_io_md": false, 00:18:42.081 "write_zeroes": true, 00:18:42.081 "zcopy": true, 00:18:42.081 "get_zone_info": false, 00:18:42.081 "zone_management": false, 00:18:42.081 "zone_append": false, 00:18:42.081 "compare": false, 00:18:42.081 "compare_and_write": false, 00:18:42.081 "abort": true, 00:18:42.081 "seek_hole": false, 00:18:42.081 "seek_data": false, 00:18:42.081 "copy": true, 00:18:42.081 "nvme_iov_md": false 00:18:42.081 }, 00:18:42.081 "memory_domains": [ 00:18:42.081 { 00:18:42.081 "dma_device_id": "system", 00:18:42.081 "dma_device_type": 1 00:18:42.081 }, 00:18:42.081 { 00:18:42.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.081 "dma_device_type": 2 00:18:42.081 } 00:18:42.081 ], 00:18:42.081 "driver_specific": {} 00:18:42.081 } 00:18:42.081 ] 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.081 "name": "Existed_Raid", 00:18:42.081 "uuid": "f5e5f2d9-ef23-494e-8882-512ccb9f1cd0", 00:18:42.081 "strip_size_kb": 0, 00:18:42.081 "state": "online", 00:18:42.081 "raid_level": "raid1", 00:18:42.081 "superblock": false, 00:18:42.081 "num_base_bdevs": 4, 00:18:42.081 "num_base_bdevs_discovered": 4, 00:18:42.081 "num_base_bdevs_operational": 4, 00:18:42.081 "base_bdevs_list": [ 00:18:42.081 { 00:18:42.081 "name": "BaseBdev1", 00:18:42.081 "uuid": "86045e6a-2773-4f3b-8351-8f6bc96e0a1d", 00:18:42.081 "is_configured": true, 00:18:42.081 "data_offset": 0, 00:18:42.081 "data_size": 65536 00:18:42.081 }, 00:18:42.081 { 00:18:42.081 "name": "BaseBdev2", 00:18:42.081 "uuid": "807e0b8f-e566-4782-bf91-06a0e7eb8ff5", 00:18:42.081 "is_configured": true, 00:18:42.081 "data_offset": 0, 00:18:42.081 "data_size": 65536 00:18:42.081 }, 00:18:42.081 { 00:18:42.081 "name": "BaseBdev3", 00:18:42.081 "uuid": "c2533967-50f7-4e57-968d-328d6c2c3f81", 
00:18:42.081 "is_configured": true, 00:18:42.081 "data_offset": 0, 00:18:42.081 "data_size": 65536 00:18:42.081 }, 00:18:42.081 { 00:18:42.081 "name": "BaseBdev4", 00:18:42.081 "uuid": "925f8b86-1a0e-4a42-8c81-5407e382dcb0", 00:18:42.081 "is_configured": true, 00:18:42.081 "data_offset": 0, 00:18:42.081 "data_size": 65536 00:18:42.081 } 00:18:42.081 ] 00:18:42.081 }' 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.081 14:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.648 [2024-11-04 14:51:12.335421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.648 14:51:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:42.648 "name": "Existed_Raid", 00:18:42.648 "aliases": [ 00:18:42.648 "f5e5f2d9-ef23-494e-8882-512ccb9f1cd0" 00:18:42.648 ], 00:18:42.648 "product_name": "Raid Volume", 00:18:42.648 "block_size": 512, 00:18:42.648 "num_blocks": 65536, 00:18:42.648 "uuid": "f5e5f2d9-ef23-494e-8882-512ccb9f1cd0", 00:18:42.648 "assigned_rate_limits": { 00:18:42.648 "rw_ios_per_sec": 0, 00:18:42.648 "rw_mbytes_per_sec": 0, 00:18:42.648 "r_mbytes_per_sec": 0, 00:18:42.648 "w_mbytes_per_sec": 0 00:18:42.648 }, 00:18:42.648 "claimed": false, 00:18:42.648 "zoned": false, 00:18:42.648 "supported_io_types": { 00:18:42.648 "read": true, 00:18:42.648 "write": true, 00:18:42.648 "unmap": false, 00:18:42.648 "flush": false, 00:18:42.648 "reset": true, 00:18:42.648 "nvme_admin": false, 00:18:42.648 "nvme_io": false, 00:18:42.648 "nvme_io_md": false, 00:18:42.648 "write_zeroes": true, 00:18:42.648 "zcopy": false, 00:18:42.648 "get_zone_info": false, 00:18:42.648 "zone_management": false, 00:18:42.648 "zone_append": false, 00:18:42.648 "compare": false, 00:18:42.648 "compare_and_write": false, 00:18:42.648 "abort": false, 00:18:42.648 "seek_hole": false, 00:18:42.648 "seek_data": false, 00:18:42.648 "copy": false, 00:18:42.648 "nvme_iov_md": false 00:18:42.648 }, 00:18:42.648 "memory_domains": [ 00:18:42.648 { 00:18:42.648 "dma_device_id": "system", 00:18:42.648 "dma_device_type": 1 00:18:42.648 }, 00:18:42.648 { 00:18:42.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.648 "dma_device_type": 2 00:18:42.648 }, 00:18:42.648 { 00:18:42.648 "dma_device_id": "system", 00:18:42.648 "dma_device_type": 1 00:18:42.648 }, 00:18:42.648 { 00:18:42.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.648 "dma_device_type": 2 00:18:42.648 }, 00:18:42.648 { 00:18:42.648 "dma_device_id": "system", 00:18:42.648 "dma_device_type": 1 00:18:42.648 }, 00:18:42.648 { 00:18:42.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.648 "dma_device_type": 2 
00:18:42.648 }, 00:18:42.648 { 00:18:42.648 "dma_device_id": "system", 00:18:42.648 "dma_device_type": 1 00:18:42.648 }, 00:18:42.648 { 00:18:42.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.648 "dma_device_type": 2 00:18:42.648 } 00:18:42.648 ], 00:18:42.648 "driver_specific": { 00:18:42.648 "raid": { 00:18:42.648 "uuid": "f5e5f2d9-ef23-494e-8882-512ccb9f1cd0", 00:18:42.648 "strip_size_kb": 0, 00:18:42.648 "state": "online", 00:18:42.648 "raid_level": "raid1", 00:18:42.649 "superblock": false, 00:18:42.649 "num_base_bdevs": 4, 00:18:42.649 "num_base_bdevs_discovered": 4, 00:18:42.649 "num_base_bdevs_operational": 4, 00:18:42.649 "base_bdevs_list": [ 00:18:42.649 { 00:18:42.649 "name": "BaseBdev1", 00:18:42.649 "uuid": "86045e6a-2773-4f3b-8351-8f6bc96e0a1d", 00:18:42.649 "is_configured": true, 00:18:42.649 "data_offset": 0, 00:18:42.649 "data_size": 65536 00:18:42.649 }, 00:18:42.649 { 00:18:42.649 "name": "BaseBdev2", 00:18:42.649 "uuid": "807e0b8f-e566-4782-bf91-06a0e7eb8ff5", 00:18:42.649 "is_configured": true, 00:18:42.649 "data_offset": 0, 00:18:42.649 "data_size": 65536 00:18:42.649 }, 00:18:42.649 { 00:18:42.649 "name": "BaseBdev3", 00:18:42.649 "uuid": "c2533967-50f7-4e57-968d-328d6c2c3f81", 00:18:42.649 "is_configured": true, 00:18:42.649 "data_offset": 0, 00:18:42.649 "data_size": 65536 00:18:42.649 }, 00:18:42.649 { 00:18:42.649 "name": "BaseBdev4", 00:18:42.649 "uuid": "925f8b86-1a0e-4a42-8c81-5407e382dcb0", 00:18:42.649 "is_configured": true, 00:18:42.649 "data_offset": 0, 00:18:42.649 "data_size": 65536 00:18:42.649 } 00:18:42.649 ] 00:18:42.649 } 00:18:42.649 } 00:18:42.649 }' 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:42.649 BaseBdev2 00:18:42.649 BaseBdev3 00:18:42.649 BaseBdev4' 00:18:42.649 
14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.649 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.908 [2024-11-04 14:51:12.679194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.908 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.166 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.166 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.166 "name": "Existed_Raid", 00:18:43.166 "uuid": "f5e5f2d9-ef23-494e-8882-512ccb9f1cd0", 00:18:43.166 "strip_size_kb": 0, 00:18:43.166 "state": "online", 00:18:43.166 "raid_level": "raid1", 00:18:43.166 "superblock": false, 00:18:43.166 "num_base_bdevs": 4, 00:18:43.166 "num_base_bdevs_discovered": 3, 00:18:43.166 "num_base_bdevs_operational": 3, 00:18:43.166 "base_bdevs_list": [ 00:18:43.166 { 00:18:43.166 "name": null, 00:18:43.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.167 "is_configured": false, 00:18:43.167 "data_offset": 0, 00:18:43.167 "data_size": 65536 00:18:43.167 }, 00:18:43.167 { 00:18:43.167 "name": "BaseBdev2", 00:18:43.167 "uuid": "807e0b8f-e566-4782-bf91-06a0e7eb8ff5", 00:18:43.167 "is_configured": true, 00:18:43.167 "data_offset": 0, 00:18:43.167 "data_size": 65536 00:18:43.167 }, 00:18:43.167 { 00:18:43.167 "name": "BaseBdev3", 00:18:43.167 "uuid": "c2533967-50f7-4e57-968d-328d6c2c3f81", 00:18:43.167 "is_configured": true, 00:18:43.167 "data_offset": 0, 00:18:43.167 "data_size": 65536 00:18:43.167 }, 00:18:43.167 { 
00:18:43.167 "name": "BaseBdev4", 00:18:43.167 "uuid": "925f8b86-1a0e-4a42-8c81-5407e382dcb0", 00:18:43.167 "is_configured": true, 00:18:43.167 "data_offset": 0, 00:18:43.167 "data_size": 65536 00:18:43.167 } 00:18:43.167 ] 00:18:43.167 }' 00:18:43.167 14:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.167 14:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.425 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:43.425 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:43.425 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.425 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:43.425 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.425 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.425 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.683 [2024-11-04 14:51:13.321963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.683 
14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.683 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.684 [2024-11-04 14:51:13.470172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.684 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:43.942 14:51:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.942 [2024-11-04 14:51:13.632586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:43.942 [2024-11-04 14:51:13.633067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.942 [2024-11-04 14:51:13.729168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.942 [2024-11-04 14:51:13.729521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.942 [2024-11-04 14:51:13.729731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:43.942 14:51:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.942 BaseBdev2 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:43.942 14:51:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.942 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 [ 00:18:44.201 { 00:18:44.201 "name": "BaseBdev2", 00:18:44.201 "aliases": [ 00:18:44.201 "15054ff5-1f63-4bfb-b412-d830fe2bc020" 00:18:44.201 ], 00:18:44.201 "product_name": "Malloc disk", 00:18:44.201 "block_size": 512, 00:18:44.201 "num_blocks": 65536, 00:18:44.201 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:44.201 "assigned_rate_limits": { 00:18:44.201 "rw_ios_per_sec": 0, 00:18:44.201 "rw_mbytes_per_sec": 0, 00:18:44.201 "r_mbytes_per_sec": 0, 00:18:44.201 "w_mbytes_per_sec": 0 00:18:44.201 }, 00:18:44.201 "claimed": false, 00:18:44.201 "zoned": false, 00:18:44.201 "supported_io_types": { 00:18:44.201 "read": true, 00:18:44.201 "write": true, 00:18:44.201 "unmap": true, 00:18:44.201 "flush": true, 00:18:44.201 "reset": true, 00:18:44.201 "nvme_admin": false, 00:18:44.201 "nvme_io": false, 00:18:44.201 "nvme_io_md": false, 00:18:44.201 "write_zeroes": true, 00:18:44.201 "zcopy": true, 00:18:44.201 "get_zone_info": false, 00:18:44.201 "zone_management": false, 00:18:44.201 "zone_append": false, 00:18:44.201 "compare": false, 00:18:44.201 "compare_and_write": false, 
00:18:44.201 "abort": true, 00:18:44.201 "seek_hole": false, 00:18:44.201 "seek_data": false, 00:18:44.201 "copy": true, 00:18:44.201 "nvme_iov_md": false 00:18:44.201 }, 00:18:44.201 "memory_domains": [ 00:18:44.201 { 00:18:44.201 "dma_device_id": "system", 00:18:44.201 "dma_device_type": 1 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.201 "dma_device_type": 2 00:18:44.201 } 00:18:44.201 ], 00:18:44.201 "driver_specific": {} 00:18:44.201 } 00:18:44.201 ] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 BaseBdev3 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:44.201 14:51:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 [ 00:18:44.201 { 00:18:44.201 "name": "BaseBdev3", 00:18:44.201 "aliases": [ 00:18:44.201 "0b79d8f2-5471-49ce-ac9b-c32c545f85d6" 00:18:44.201 ], 00:18:44.201 "product_name": "Malloc disk", 00:18:44.201 "block_size": 512, 00:18:44.201 "num_blocks": 65536, 00:18:44.201 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:44.201 "assigned_rate_limits": { 00:18:44.201 "rw_ios_per_sec": 0, 00:18:44.201 "rw_mbytes_per_sec": 0, 00:18:44.201 "r_mbytes_per_sec": 0, 00:18:44.201 "w_mbytes_per_sec": 0 00:18:44.201 }, 00:18:44.201 "claimed": false, 00:18:44.201 "zoned": false, 00:18:44.201 "supported_io_types": { 00:18:44.201 "read": true, 00:18:44.201 "write": true, 00:18:44.201 "unmap": true, 00:18:44.201 "flush": true, 00:18:44.201 "reset": true, 00:18:44.201 "nvme_admin": false, 00:18:44.201 "nvme_io": false, 00:18:44.201 "nvme_io_md": false, 00:18:44.201 "write_zeroes": true, 00:18:44.201 "zcopy": true, 00:18:44.201 "get_zone_info": false, 00:18:44.201 "zone_management": false, 00:18:44.201 "zone_append": false, 00:18:44.201 "compare": false, 00:18:44.201 "compare_and_write": false, 
00:18:44.201 "abort": true, 00:18:44.201 "seek_hole": false, 00:18:44.201 "seek_data": false, 00:18:44.201 "copy": true, 00:18:44.201 "nvme_iov_md": false 00:18:44.201 }, 00:18:44.201 "memory_domains": [ 00:18:44.201 { 00:18:44.201 "dma_device_id": "system", 00:18:44.201 "dma_device_type": 1 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.201 "dma_device_type": 2 00:18:44.201 } 00:18:44.201 ], 00:18:44.201 "driver_specific": {} 00:18:44.201 } 00:18:44.201 ] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 BaseBdev4 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:44.201 14:51:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.201 14:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 [ 00:18:44.201 { 00:18:44.201 "name": "BaseBdev4", 00:18:44.201 "aliases": [ 00:18:44.201 "c881ee13-9c02-44a0-b73b-36e1f26dc4ba" 00:18:44.201 ], 00:18:44.201 "product_name": "Malloc disk", 00:18:44.201 "block_size": 512, 00:18:44.201 "num_blocks": 65536, 00:18:44.201 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:44.201 "assigned_rate_limits": { 00:18:44.201 "rw_ios_per_sec": 0, 00:18:44.201 "rw_mbytes_per_sec": 0, 00:18:44.201 "r_mbytes_per_sec": 0, 00:18:44.201 "w_mbytes_per_sec": 0 00:18:44.201 }, 00:18:44.201 "claimed": false, 00:18:44.201 "zoned": false, 00:18:44.201 "supported_io_types": { 00:18:44.201 "read": true, 00:18:44.202 "write": true, 00:18:44.202 "unmap": true, 00:18:44.202 "flush": true, 00:18:44.202 "reset": true, 00:18:44.202 "nvme_admin": false, 00:18:44.202 "nvme_io": false, 00:18:44.202 "nvme_io_md": false, 00:18:44.202 "write_zeroes": true, 00:18:44.202 "zcopy": true, 00:18:44.202 "get_zone_info": false, 00:18:44.202 "zone_management": false, 00:18:44.202 "zone_append": false, 00:18:44.202 "compare": false, 00:18:44.202 "compare_and_write": false, 
00:18:44.202 "abort": true, 00:18:44.202 "seek_hole": false, 00:18:44.202 "seek_data": false, 00:18:44.202 "copy": true, 00:18:44.202 "nvme_iov_md": false 00:18:44.202 }, 00:18:44.202 "memory_domains": [ 00:18:44.202 { 00:18:44.202 "dma_device_id": "system", 00:18:44.202 "dma_device_type": 1 00:18:44.202 }, 00:18:44.202 { 00:18:44.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.202 "dma_device_type": 2 00:18:44.202 } 00:18:44.202 ], 00:18:44.202 "driver_specific": {} 00:18:44.202 } 00:18:44.202 ] 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 [2024-11-04 14:51:14.018145] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:44.202 [2024-11-04 14:51:14.018600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:44.202 [2024-11-04 14:51:14.018747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.202 [2024-11-04 14:51:14.021739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:44.202 [2024-11-04 14:51:14.021814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:44.202 14:51:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.202 "name": "Existed_Raid", 00:18:44.202 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:44.202 "strip_size_kb": 0, 00:18:44.202 "state": "configuring", 00:18:44.202 "raid_level": "raid1", 00:18:44.202 "superblock": false, 00:18:44.202 "num_base_bdevs": 4, 00:18:44.202 "num_base_bdevs_discovered": 3, 00:18:44.202 "num_base_bdevs_operational": 4, 00:18:44.202 "base_bdevs_list": [ 00:18:44.202 { 00:18:44.202 "name": "BaseBdev1", 00:18:44.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.202 "is_configured": false, 00:18:44.202 "data_offset": 0, 00:18:44.202 "data_size": 0 00:18:44.202 }, 00:18:44.202 { 00:18:44.202 "name": "BaseBdev2", 00:18:44.202 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:44.202 "is_configured": true, 00:18:44.202 "data_offset": 0, 00:18:44.202 "data_size": 65536 00:18:44.202 }, 00:18:44.202 { 00:18:44.202 "name": "BaseBdev3", 00:18:44.202 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:44.202 "is_configured": true, 00:18:44.202 "data_offset": 0, 00:18:44.202 "data_size": 65536 00:18:44.202 }, 00:18:44.202 { 00:18:44.202 "name": "BaseBdev4", 00:18:44.202 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:44.202 "is_configured": true, 00:18:44.202 "data_offset": 0, 00:18:44.202 "data_size": 65536 00:18:44.202 } 00:18:44.202 ] 00:18:44.202 }' 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.202 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.767 [2024-11-04 14:51:14.542412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.767 "name": "Existed_Raid", 00:18:44.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.767 
"strip_size_kb": 0, 00:18:44.767 "state": "configuring", 00:18:44.767 "raid_level": "raid1", 00:18:44.767 "superblock": false, 00:18:44.767 "num_base_bdevs": 4, 00:18:44.767 "num_base_bdevs_discovered": 2, 00:18:44.767 "num_base_bdevs_operational": 4, 00:18:44.767 "base_bdevs_list": [ 00:18:44.767 { 00:18:44.767 "name": "BaseBdev1", 00:18:44.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.767 "is_configured": false, 00:18:44.767 "data_offset": 0, 00:18:44.767 "data_size": 0 00:18:44.767 }, 00:18:44.767 { 00:18:44.767 "name": null, 00:18:44.767 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:44.767 "is_configured": false, 00:18:44.767 "data_offset": 0, 00:18:44.767 "data_size": 65536 00:18:44.767 }, 00:18:44.767 { 00:18:44.767 "name": "BaseBdev3", 00:18:44.767 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:44.767 "is_configured": true, 00:18:44.767 "data_offset": 0, 00:18:44.767 "data_size": 65536 00:18:44.767 }, 00:18:44.767 { 00:18:44.767 "name": "BaseBdev4", 00:18:44.767 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:44.767 "is_configured": true, 00:18:44.767 "data_offset": 0, 00:18:44.767 "data_size": 65536 00:18:44.767 } 00:18:44.767 ] 00:18:44.767 }' 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.767 14:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.333 14:51:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.333 [2024-11-04 14:51:15.150592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.333 BaseBdev1 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.333 [ 00:18:45.333 { 00:18:45.333 "name": "BaseBdev1", 00:18:45.333 "aliases": [ 00:18:45.333 "22f23cf8-a09c-4557-b63d-b42c285fe5f3" 00:18:45.333 ], 00:18:45.333 "product_name": "Malloc disk", 00:18:45.333 "block_size": 512, 00:18:45.333 "num_blocks": 65536, 00:18:45.333 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:45.333 "assigned_rate_limits": { 00:18:45.333 "rw_ios_per_sec": 0, 00:18:45.333 "rw_mbytes_per_sec": 0, 00:18:45.333 "r_mbytes_per_sec": 0, 00:18:45.333 "w_mbytes_per_sec": 0 00:18:45.333 }, 00:18:45.333 "claimed": true, 00:18:45.333 "claim_type": "exclusive_write", 00:18:45.333 "zoned": false, 00:18:45.333 "supported_io_types": { 00:18:45.333 "read": true, 00:18:45.333 "write": true, 00:18:45.333 "unmap": true, 00:18:45.333 "flush": true, 00:18:45.333 "reset": true, 00:18:45.333 "nvme_admin": false, 00:18:45.333 "nvme_io": false, 00:18:45.333 "nvme_io_md": false, 00:18:45.333 "write_zeroes": true, 00:18:45.333 "zcopy": true, 00:18:45.333 "get_zone_info": false, 00:18:45.333 "zone_management": false, 00:18:45.333 "zone_append": false, 00:18:45.333 "compare": false, 00:18:45.333 "compare_and_write": false, 00:18:45.333 "abort": true, 00:18:45.333 "seek_hole": false, 00:18:45.333 "seek_data": false, 00:18:45.333 "copy": true, 00:18:45.333 "nvme_iov_md": false 00:18:45.333 }, 00:18:45.333 "memory_domains": [ 00:18:45.333 { 00:18:45.333 "dma_device_id": "system", 00:18:45.333 "dma_device_type": 1 00:18:45.333 }, 00:18:45.333 { 00:18:45.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.333 "dma_device_type": 2 00:18:45.333 } 00:18:45.333 ], 00:18:45.333 "driver_specific": {} 00:18:45.333 } 00:18:45.333 ] 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.333 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.594 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.594 "name": "Existed_Raid", 00:18:45.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.594 
"strip_size_kb": 0, 00:18:45.594 "state": "configuring", 00:18:45.594 "raid_level": "raid1", 00:18:45.594 "superblock": false, 00:18:45.594 "num_base_bdevs": 4, 00:18:45.594 "num_base_bdevs_discovered": 3, 00:18:45.594 "num_base_bdevs_operational": 4, 00:18:45.594 "base_bdevs_list": [ 00:18:45.594 { 00:18:45.594 "name": "BaseBdev1", 00:18:45.594 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:45.594 "is_configured": true, 00:18:45.594 "data_offset": 0, 00:18:45.594 "data_size": 65536 00:18:45.594 }, 00:18:45.594 { 00:18:45.594 "name": null, 00:18:45.594 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:45.594 "is_configured": false, 00:18:45.594 "data_offset": 0, 00:18:45.594 "data_size": 65536 00:18:45.594 }, 00:18:45.594 { 00:18:45.594 "name": "BaseBdev3", 00:18:45.594 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:45.594 "is_configured": true, 00:18:45.594 "data_offset": 0, 00:18:45.594 "data_size": 65536 00:18:45.594 }, 00:18:45.594 { 00:18:45.594 "name": "BaseBdev4", 00:18:45.594 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:45.594 "is_configured": true, 00:18:45.594 "data_offset": 0, 00:18:45.594 "data_size": 65536 00:18:45.594 } 00:18:45.594 ] 00:18:45.594 }' 00:18:45.594 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.594 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.852 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.852 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:45.852 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.852 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.852 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.852 
14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:45.852 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:45.852 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.852 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.111 [2024-11-04 14:51:15.742831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.111 14:51:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.111 "name": "Existed_Raid", 00:18:46.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.111 "strip_size_kb": 0, 00:18:46.111 "state": "configuring", 00:18:46.111 "raid_level": "raid1", 00:18:46.111 "superblock": false, 00:18:46.111 "num_base_bdevs": 4, 00:18:46.111 "num_base_bdevs_discovered": 2, 00:18:46.111 "num_base_bdevs_operational": 4, 00:18:46.111 "base_bdevs_list": [ 00:18:46.111 { 00:18:46.111 "name": "BaseBdev1", 00:18:46.111 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:46.111 "is_configured": true, 00:18:46.111 "data_offset": 0, 00:18:46.111 "data_size": 65536 00:18:46.111 }, 00:18:46.111 { 00:18:46.111 "name": null, 00:18:46.111 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:46.111 "is_configured": false, 00:18:46.111 "data_offset": 0, 00:18:46.111 "data_size": 65536 00:18:46.111 }, 00:18:46.111 { 00:18:46.111 "name": null, 00:18:46.111 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:46.111 "is_configured": false, 00:18:46.111 "data_offset": 0, 00:18:46.111 "data_size": 65536 00:18:46.111 }, 00:18:46.111 { 00:18:46.111 "name": "BaseBdev4", 00:18:46.111 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:46.111 "is_configured": true, 00:18:46.111 "data_offset": 0, 00:18:46.111 "data_size": 65536 00:18:46.111 } 00:18:46.111 ] 00:18:46.111 }' 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.111 14:51:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.370 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.370 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:46.370 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.370 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.370 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.629 [2024-11-04 14:51:16.286955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.629 "name": "Existed_Raid", 00:18:46.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.629 "strip_size_kb": 0, 00:18:46.629 "state": "configuring", 00:18:46.629 "raid_level": "raid1", 00:18:46.629 "superblock": false, 00:18:46.629 "num_base_bdevs": 4, 00:18:46.629 "num_base_bdevs_discovered": 3, 00:18:46.629 "num_base_bdevs_operational": 4, 00:18:46.629 "base_bdevs_list": [ 00:18:46.629 { 00:18:46.629 "name": "BaseBdev1", 00:18:46.629 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:46.629 "is_configured": true, 00:18:46.629 "data_offset": 0, 00:18:46.629 "data_size": 65536 00:18:46.629 }, 00:18:46.629 { 00:18:46.629 "name": null, 00:18:46.629 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:46.629 "is_configured": false, 00:18:46.629 "data_offset": 0, 00:18:46.629 "data_size": 65536 00:18:46.629 }, 00:18:46.629 { 
00:18:46.629 "name": "BaseBdev3", 00:18:46.629 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:46.629 "is_configured": true, 00:18:46.629 "data_offset": 0, 00:18:46.629 "data_size": 65536 00:18:46.629 }, 00:18:46.629 { 00:18:46.629 "name": "BaseBdev4", 00:18:46.629 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:46.629 "is_configured": true, 00:18:46.629 "data_offset": 0, 00:18:46.629 "data_size": 65536 00:18:46.629 } 00:18:46.629 ] 00:18:46.629 }' 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.629 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.196 [2024-11-04 14:51:16.871171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.196 14:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.196 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.196 "name": "Existed_Raid", 00:18:47.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.196 "strip_size_kb": 0, 00:18:47.196 "state": "configuring", 00:18:47.196 "raid_level": "raid1", 00:18:47.196 "superblock": false, 00:18:47.196 
"num_base_bdevs": 4, 00:18:47.196 "num_base_bdevs_discovered": 2, 00:18:47.196 "num_base_bdevs_operational": 4, 00:18:47.196 "base_bdevs_list": [ 00:18:47.196 { 00:18:47.196 "name": null, 00:18:47.196 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:47.196 "is_configured": false, 00:18:47.196 "data_offset": 0, 00:18:47.196 "data_size": 65536 00:18:47.196 }, 00:18:47.196 { 00:18:47.196 "name": null, 00:18:47.196 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:47.196 "is_configured": false, 00:18:47.196 "data_offset": 0, 00:18:47.197 "data_size": 65536 00:18:47.197 }, 00:18:47.197 { 00:18:47.197 "name": "BaseBdev3", 00:18:47.197 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:47.197 "is_configured": true, 00:18:47.197 "data_offset": 0, 00:18:47.197 "data_size": 65536 00:18:47.197 }, 00:18:47.197 { 00:18:47.197 "name": "BaseBdev4", 00:18:47.197 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:47.197 "is_configured": true, 00:18:47.197 "data_offset": 0, 00:18:47.197 "data_size": 65536 00:18:47.197 } 00:18:47.197 ] 00:18:47.197 }' 00:18:47.197 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.197 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:47.763 14:51:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.763 [2024-11-04 14:51:17.498199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.763 14:51:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.763 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.763 "name": "Existed_Raid", 00:18:47.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.763 "strip_size_kb": 0, 00:18:47.763 "state": "configuring", 00:18:47.763 "raid_level": "raid1", 00:18:47.763 "superblock": false, 00:18:47.763 "num_base_bdevs": 4, 00:18:47.763 "num_base_bdevs_discovered": 3, 00:18:47.763 "num_base_bdevs_operational": 4, 00:18:47.763 "base_bdevs_list": [ 00:18:47.763 { 00:18:47.763 "name": null, 00:18:47.763 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:47.763 "is_configured": false, 00:18:47.763 "data_offset": 0, 00:18:47.763 "data_size": 65536 00:18:47.763 }, 00:18:47.763 { 00:18:47.763 "name": "BaseBdev2", 00:18:47.763 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:47.763 "is_configured": true, 00:18:47.763 "data_offset": 0, 00:18:47.763 "data_size": 65536 00:18:47.763 }, 00:18:47.763 { 00:18:47.763 "name": "BaseBdev3", 00:18:47.764 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:47.764 "is_configured": true, 00:18:47.764 "data_offset": 0, 00:18:47.764 "data_size": 65536 00:18:47.764 }, 00:18:47.764 { 00:18:47.764 "name": "BaseBdev4", 00:18:47.764 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:47.764 "is_configured": true, 00:18:47.764 "data_offset": 0, 00:18:47.764 "data_size": 65536 00:18:47.764 } 00:18:47.764 ] 00:18:47.764 }' 00:18:47.764 14:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.764 14:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 22f23cf8-a09c-4557-b63d-b42c285fe5f3 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 [2024-11-04 14:51:18.193330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:48.366 [2024-11-04 14:51:18.193652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:48.366 [2024-11-04 14:51:18.193687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:48.366 [2024-11-04 14:51:18.194065] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:48.366 [2024-11-04 14:51:18.194360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:48.366 [2024-11-04 14:51:18.194391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:48.366 NewBaseBdev 00:18:48.366 [2024-11-04 14:51:18.194717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.366 14:51:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.366 [ 00:18:48.366 { 00:18:48.366 "name": "NewBaseBdev", 00:18:48.366 "aliases": [ 00:18:48.366 "22f23cf8-a09c-4557-b63d-b42c285fe5f3" 00:18:48.366 ], 00:18:48.366 "product_name": "Malloc disk", 00:18:48.366 "block_size": 512, 00:18:48.366 "num_blocks": 65536, 00:18:48.366 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:48.366 "assigned_rate_limits": { 00:18:48.366 "rw_ios_per_sec": 0, 00:18:48.366 "rw_mbytes_per_sec": 0, 00:18:48.366 "r_mbytes_per_sec": 0, 00:18:48.366 "w_mbytes_per_sec": 0 00:18:48.366 }, 00:18:48.366 "claimed": true, 00:18:48.366 "claim_type": "exclusive_write", 00:18:48.366 "zoned": false, 00:18:48.366 "supported_io_types": { 00:18:48.366 "read": true, 00:18:48.366 "write": true, 00:18:48.366 "unmap": true, 00:18:48.366 "flush": true, 00:18:48.366 "reset": true, 00:18:48.366 "nvme_admin": false, 00:18:48.366 "nvme_io": false, 00:18:48.366 "nvme_io_md": false, 00:18:48.366 "write_zeroes": true, 00:18:48.366 "zcopy": true, 00:18:48.366 "get_zone_info": false, 00:18:48.366 "zone_management": false, 00:18:48.366 "zone_append": false, 00:18:48.366 "compare": false, 00:18:48.366 "compare_and_write": false, 00:18:48.366 "abort": true, 00:18:48.366 "seek_hole": false, 00:18:48.366 "seek_data": false, 00:18:48.366 "copy": true, 00:18:48.366 "nvme_iov_md": false 00:18:48.366 }, 00:18:48.366 "memory_domains": [ 00:18:48.366 { 00:18:48.366 "dma_device_id": "system", 00:18:48.366 "dma_device_type": 1 00:18:48.366 }, 00:18:48.366 { 00:18:48.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.366 "dma_device_type": 2 00:18:48.366 } 00:18:48.366 ], 00:18:48.366 "driver_specific": {} 00:18:48.366 } 00:18:48.366 ] 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:48.366 14:51:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.366 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.647 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.647 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.647 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.647 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.647 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.647 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.647 "name": "Existed_Raid", 00:18:48.647 "uuid": "d900dfa7-c730-43af-8acd-373493cdd079", 00:18:48.647 "strip_size_kb": 0, 00:18:48.647 "state": "online", 00:18:48.647 "raid_level": "raid1", 
00:18:48.647 "superblock": false, 00:18:48.647 "num_base_bdevs": 4, 00:18:48.647 "num_base_bdevs_discovered": 4, 00:18:48.647 "num_base_bdevs_operational": 4, 00:18:48.647 "base_bdevs_list": [ 00:18:48.647 { 00:18:48.647 "name": "NewBaseBdev", 00:18:48.647 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:48.647 "is_configured": true, 00:18:48.647 "data_offset": 0, 00:18:48.647 "data_size": 65536 00:18:48.647 }, 00:18:48.647 { 00:18:48.647 "name": "BaseBdev2", 00:18:48.647 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:48.647 "is_configured": true, 00:18:48.647 "data_offset": 0, 00:18:48.647 "data_size": 65536 00:18:48.647 }, 00:18:48.647 { 00:18:48.647 "name": "BaseBdev3", 00:18:48.647 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:48.647 "is_configured": true, 00:18:48.647 "data_offset": 0, 00:18:48.647 "data_size": 65536 00:18:48.647 }, 00:18:48.647 { 00:18:48.647 "name": "BaseBdev4", 00:18:48.647 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:48.647 "is_configured": true, 00:18:48.647 "data_offset": 0, 00:18:48.647 "data_size": 65536 00:18:48.647 } 00:18:48.647 ] 00:18:48.647 }' 00:18:48.647 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.647 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.905 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.905 [2024-11-04 14:51:18.769982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.164 "name": "Existed_Raid", 00:18:49.164 "aliases": [ 00:18:49.164 "d900dfa7-c730-43af-8acd-373493cdd079" 00:18:49.164 ], 00:18:49.164 "product_name": "Raid Volume", 00:18:49.164 "block_size": 512, 00:18:49.164 "num_blocks": 65536, 00:18:49.164 "uuid": "d900dfa7-c730-43af-8acd-373493cdd079", 00:18:49.164 "assigned_rate_limits": { 00:18:49.164 "rw_ios_per_sec": 0, 00:18:49.164 "rw_mbytes_per_sec": 0, 00:18:49.164 "r_mbytes_per_sec": 0, 00:18:49.164 "w_mbytes_per_sec": 0 00:18:49.164 }, 00:18:49.164 "claimed": false, 00:18:49.164 "zoned": false, 00:18:49.164 "supported_io_types": { 00:18:49.164 "read": true, 00:18:49.164 "write": true, 00:18:49.164 "unmap": false, 00:18:49.164 "flush": false, 00:18:49.164 "reset": true, 00:18:49.164 "nvme_admin": false, 00:18:49.164 "nvme_io": false, 00:18:49.164 "nvme_io_md": false, 00:18:49.164 "write_zeroes": true, 00:18:49.164 "zcopy": false, 00:18:49.164 "get_zone_info": false, 00:18:49.164 "zone_management": false, 00:18:49.164 "zone_append": false, 00:18:49.164 "compare": false, 00:18:49.164 "compare_and_write": false, 00:18:49.164 "abort": false, 00:18:49.164 "seek_hole": false, 00:18:49.164 "seek_data": false, 00:18:49.164 "copy": false, 00:18:49.164 
"nvme_iov_md": false 00:18:49.164 }, 00:18:49.164 "memory_domains": [ 00:18:49.164 { 00:18:49.164 "dma_device_id": "system", 00:18:49.164 "dma_device_type": 1 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.164 "dma_device_type": 2 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "dma_device_id": "system", 00:18:49.164 "dma_device_type": 1 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.164 "dma_device_type": 2 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "dma_device_id": "system", 00:18:49.164 "dma_device_type": 1 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.164 "dma_device_type": 2 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "dma_device_id": "system", 00:18:49.164 "dma_device_type": 1 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.164 "dma_device_type": 2 00:18:49.164 } 00:18:49.164 ], 00:18:49.164 "driver_specific": { 00:18:49.164 "raid": { 00:18:49.164 "uuid": "d900dfa7-c730-43af-8acd-373493cdd079", 00:18:49.164 "strip_size_kb": 0, 00:18:49.164 "state": "online", 00:18:49.164 "raid_level": "raid1", 00:18:49.164 "superblock": false, 00:18:49.164 "num_base_bdevs": 4, 00:18:49.164 "num_base_bdevs_discovered": 4, 00:18:49.164 "num_base_bdevs_operational": 4, 00:18:49.164 "base_bdevs_list": [ 00:18:49.164 { 00:18:49.164 "name": "NewBaseBdev", 00:18:49.164 "uuid": "22f23cf8-a09c-4557-b63d-b42c285fe5f3", 00:18:49.164 "is_configured": true, 00:18:49.164 "data_offset": 0, 00:18:49.164 "data_size": 65536 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "name": "BaseBdev2", 00:18:49.164 "uuid": "15054ff5-1f63-4bfb-b412-d830fe2bc020", 00:18:49.164 "is_configured": true, 00:18:49.164 "data_offset": 0, 00:18:49.164 "data_size": 65536 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "name": "BaseBdev3", 00:18:49.164 "uuid": "0b79d8f2-5471-49ce-ac9b-c32c545f85d6", 00:18:49.164 "is_configured": true, 
00:18:49.164 "data_offset": 0, 00:18:49.164 "data_size": 65536 00:18:49.164 }, 00:18:49.164 { 00:18:49.164 "name": "BaseBdev4", 00:18:49.164 "uuid": "c881ee13-9c02-44a0-b73b-36e1f26dc4ba", 00:18:49.164 "is_configured": true, 00:18:49.164 "data_offset": 0, 00:18:49.164 "data_size": 65536 00:18:49.164 } 00:18:49.164 ] 00:18:49.164 } 00:18:49.164 } 00:18:49.164 }' 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:49.164 BaseBdev2 00:18:49.164 BaseBdev3 00:18:49.164 BaseBdev4' 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.164 14:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:49.165 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.165 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.165 14:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.165 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:49.165 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:49.165 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.165 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:49.165 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.165 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.165 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.424 [2024-11-04 14:51:19.157570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:49.424 [2024-11-04 14:51:19.157866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.424 [2024-11-04 14:51:19.157998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.424 [2024-11-04 14:51:19.158395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.424 [2024-11-04 14:51:19.158424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73477 
00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73477 ']' 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73477 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73477 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73477' 00:18:49.424 killing process with pid 73477 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73477 00:18:49.424 [2024-11-04 14:51:19.198150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:49.424 14:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73477 00:18:49.682 [2024-11-04 14:51:19.549169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.056 ************************************ 00:18:51.056 END TEST raid_state_function_test 00:18:51.056 ************************************ 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:51.056 00:18:51.056 real 0m12.746s 00:18:51.056 user 0m20.916s 00:18:51.056 sys 0m1.892s 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.056 14:51:20 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:51.056 14:51:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:51.056 14:51:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:51.056 14:51:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.056 ************************************ 00:18:51.056 START TEST raid_state_function_test_sb 00:18:51.056 ************************************ 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:51.056 14:51:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:51.056 Process raid pid: 74154 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74154 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74154' 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74154 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74154 ']' 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:51.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:51.056 14:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.056 [2024-11-04 14:51:20.820766] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:18:51.056 [2024-11-04 14:51:20.821127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.315 [2024-11-04 14:51:21.015950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.315 [2024-11-04 14:51:21.169074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.630 [2024-11-04 14:51:21.378638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.630 [2024-11-04 14:51:21.378694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.888 14:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.888 14:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:51.888 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:51.888 14:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.888 14:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.888 [2024-11-04 14:51:21.778717] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.888 [2024-11-04 14:51:21.779033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.888 [2024-11-04 14:51:21.779165] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.888 [2024-11-04 14:51:21.779328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:52.147 [2024-11-04 14:51:21.779447] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:52.147 [2024-11-04 14:51:21.779619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:52.147 [2024-11-04 14:51:21.779742] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:52.147 [2024-11-04 14:51:21.779812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.147 14:51:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.147 "name": "Existed_Raid", 00:18:52.147 "uuid": "876e064f-9f6a-47b5-9084-95e0da28f9e5", 00:18:52.147 "strip_size_kb": 0, 00:18:52.147 "state": "configuring", 00:18:52.147 "raid_level": "raid1", 00:18:52.147 "superblock": true, 00:18:52.147 "num_base_bdevs": 4, 00:18:52.147 "num_base_bdevs_discovered": 0, 00:18:52.147 "num_base_bdevs_operational": 4, 00:18:52.147 "base_bdevs_list": [ 00:18:52.147 { 00:18:52.147 "name": "BaseBdev1", 00:18:52.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.147 "is_configured": false, 00:18:52.147 "data_offset": 0, 00:18:52.147 "data_size": 0 00:18:52.147 }, 00:18:52.147 { 00:18:52.147 "name": "BaseBdev2", 00:18:52.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.147 "is_configured": false, 00:18:52.147 "data_offset": 0, 00:18:52.147 "data_size": 0 00:18:52.147 }, 00:18:52.147 { 00:18:52.147 "name": "BaseBdev3", 00:18:52.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.147 "is_configured": false, 00:18:52.147 "data_offset": 0, 00:18:52.147 "data_size": 0 00:18:52.147 }, 00:18:52.147 { 00:18:52.147 "name": "BaseBdev4", 00:18:52.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.147 "is_configured": false, 00:18:52.147 "data_offset": 0, 00:18:52.147 "data_size": 0 00:18:52.147 } 00:18:52.147 ] 00:18:52.147 }' 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.147 14:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.405 14:51:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:52.405 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.405 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.405 [2024-11-04 14:51:22.290835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:52.405 [2024-11-04 14:51:22.291134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:52.405 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.405 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:52.405 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.664 [2024-11-04 14:51:22.302803] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:52.664 [2024-11-04 14:51:22.302988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:52.664 [2024-11-04 14:51:22.303123] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:52.664 [2024-11-04 14:51:22.303274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:52.664 [2024-11-04 14:51:22.303394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:52.664 [2024-11-04 14:51:22.303455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:52.664 [2024-11-04 14:51:22.303559] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:18:52.664 [2024-11-04 14:51:22.303632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.664 [2024-11-04 14:51:22.348491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.664 BaseBdev1 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.664 [ 00:18:52.664 { 00:18:52.664 "name": "BaseBdev1", 00:18:52.664 "aliases": [ 00:18:52.664 "25dc7f4f-00b0-4e35-8b8a-3f8de89df306" 00:18:52.664 ], 00:18:52.664 "product_name": "Malloc disk", 00:18:52.664 "block_size": 512, 00:18:52.664 "num_blocks": 65536, 00:18:52.664 "uuid": "25dc7f4f-00b0-4e35-8b8a-3f8de89df306", 00:18:52.664 "assigned_rate_limits": { 00:18:52.664 "rw_ios_per_sec": 0, 00:18:52.664 "rw_mbytes_per_sec": 0, 00:18:52.664 "r_mbytes_per_sec": 0, 00:18:52.664 "w_mbytes_per_sec": 0 00:18:52.664 }, 00:18:52.664 "claimed": true, 00:18:52.664 "claim_type": "exclusive_write", 00:18:52.664 "zoned": false, 00:18:52.664 "supported_io_types": { 00:18:52.664 "read": true, 00:18:52.664 "write": true, 00:18:52.664 "unmap": true, 00:18:52.664 "flush": true, 00:18:52.664 "reset": true, 00:18:52.664 "nvme_admin": false, 00:18:52.664 "nvme_io": false, 00:18:52.664 "nvme_io_md": false, 00:18:52.664 "write_zeroes": true, 00:18:52.664 "zcopy": true, 00:18:52.664 "get_zone_info": false, 00:18:52.664 "zone_management": false, 00:18:52.664 "zone_append": false, 00:18:52.664 "compare": false, 00:18:52.664 "compare_and_write": false, 00:18:52.664 "abort": true, 00:18:52.664 "seek_hole": false, 00:18:52.664 "seek_data": false, 00:18:52.664 "copy": true, 00:18:52.664 "nvme_iov_md": false 00:18:52.664 }, 00:18:52.664 "memory_domains": [ 00:18:52.664 { 00:18:52.664 "dma_device_id": "system", 00:18:52.664 "dma_device_type": 1 00:18:52.664 }, 00:18:52.664 { 00:18:52.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.664 "dma_device_type": 2 00:18:52.664 } 00:18:52.664 
], 00:18:52.664 "driver_specific": {} 00:18:52.664 } 00:18:52.664 ] 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.664 14:51:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.664 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.664 "name": "Existed_Raid", 00:18:52.664 "uuid": "1048cc55-9bf4-42e4-b915-80d54c202eb7", 00:18:52.664 "strip_size_kb": 0, 00:18:52.664 "state": "configuring", 00:18:52.664 "raid_level": "raid1", 00:18:52.664 "superblock": true, 00:18:52.664 "num_base_bdevs": 4, 00:18:52.664 "num_base_bdevs_discovered": 1, 00:18:52.664 "num_base_bdevs_operational": 4, 00:18:52.664 "base_bdevs_list": [ 00:18:52.664 { 00:18:52.664 "name": "BaseBdev1", 00:18:52.664 "uuid": "25dc7f4f-00b0-4e35-8b8a-3f8de89df306", 00:18:52.664 "is_configured": true, 00:18:52.664 "data_offset": 2048, 00:18:52.664 "data_size": 63488 00:18:52.664 }, 00:18:52.664 { 00:18:52.664 "name": "BaseBdev2", 00:18:52.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.665 "is_configured": false, 00:18:52.665 "data_offset": 0, 00:18:52.665 "data_size": 0 00:18:52.665 }, 00:18:52.665 { 00:18:52.665 "name": "BaseBdev3", 00:18:52.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.665 "is_configured": false, 00:18:52.665 "data_offset": 0, 00:18:52.665 "data_size": 0 00:18:52.665 }, 00:18:52.665 { 00:18:52.665 "name": "BaseBdev4", 00:18:52.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.665 "is_configured": false, 00:18:52.665 "data_offset": 0, 00:18:52.665 "data_size": 0 00:18:52.665 } 00:18:52.665 ] 00:18:52.665 }' 00:18:52.665 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.665 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.230 14:51:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.230 [2024-11-04 14:51:22.884760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:53.230 [2024-11-04 14:51:22.884843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.230 [2024-11-04 14:51:22.892768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.230 [2024-11-04 14:51:22.895503] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:53.230 [2024-11-04 14:51:22.895698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:53.230 [2024-11-04 14:51:22.895820] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:53.230 [2024-11-04 14:51:22.895857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:53.230 [2024-11-04 14:51:22.895871] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:53.230 [2024-11-04 14:51:22.895885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:53.230 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:18:53.231 "name": "Existed_Raid", 00:18:53.231 "uuid": "3f6744c7-ff81-46de-a68c-441a8871360f", 00:18:53.231 "strip_size_kb": 0, 00:18:53.231 "state": "configuring", 00:18:53.231 "raid_level": "raid1", 00:18:53.231 "superblock": true, 00:18:53.231 "num_base_bdevs": 4, 00:18:53.231 "num_base_bdevs_discovered": 1, 00:18:53.231 "num_base_bdevs_operational": 4, 00:18:53.231 "base_bdevs_list": [ 00:18:53.231 { 00:18:53.231 "name": "BaseBdev1", 00:18:53.231 "uuid": "25dc7f4f-00b0-4e35-8b8a-3f8de89df306", 00:18:53.231 "is_configured": true, 00:18:53.231 "data_offset": 2048, 00:18:53.231 "data_size": 63488 00:18:53.231 }, 00:18:53.231 { 00:18:53.231 "name": "BaseBdev2", 00:18:53.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.231 "is_configured": false, 00:18:53.231 "data_offset": 0, 00:18:53.231 "data_size": 0 00:18:53.231 }, 00:18:53.231 { 00:18:53.231 "name": "BaseBdev3", 00:18:53.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.231 "is_configured": false, 00:18:53.231 "data_offset": 0, 00:18:53.231 "data_size": 0 00:18:53.231 }, 00:18:53.231 { 00:18:53.231 "name": "BaseBdev4", 00:18:53.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.231 "is_configured": false, 00:18:53.231 "data_offset": 0, 00:18:53.231 "data_size": 0 00:18:53.231 } 00:18:53.231 ] 00:18:53.231 }' 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.231 14:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.796 [2024-11-04 14:51:23.451663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:18:53.796 BaseBdev2 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.796 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.796 [ 00:18:53.796 { 00:18:53.796 "name": "BaseBdev2", 00:18:53.796 "aliases": [ 00:18:53.796 "af083f1f-ed8e-448a-b3bc-b67f6acaedd2" 00:18:53.796 ], 00:18:53.796 "product_name": "Malloc disk", 00:18:53.796 "block_size": 512, 00:18:53.796 "num_blocks": 65536, 00:18:53.796 "uuid": "af083f1f-ed8e-448a-b3bc-b67f6acaedd2", 00:18:53.796 
"assigned_rate_limits": { 00:18:53.796 "rw_ios_per_sec": 0, 00:18:53.796 "rw_mbytes_per_sec": 0, 00:18:53.796 "r_mbytes_per_sec": 0, 00:18:53.796 "w_mbytes_per_sec": 0 00:18:53.796 }, 00:18:53.796 "claimed": true, 00:18:53.796 "claim_type": "exclusive_write", 00:18:53.796 "zoned": false, 00:18:53.796 "supported_io_types": { 00:18:53.796 "read": true, 00:18:53.796 "write": true, 00:18:53.796 "unmap": true, 00:18:53.796 "flush": true, 00:18:53.796 "reset": true, 00:18:53.796 "nvme_admin": false, 00:18:53.796 "nvme_io": false, 00:18:53.796 "nvme_io_md": false, 00:18:53.796 "write_zeroes": true, 00:18:53.796 "zcopy": true, 00:18:53.796 "get_zone_info": false, 00:18:53.796 "zone_management": false, 00:18:53.796 "zone_append": false, 00:18:53.796 "compare": false, 00:18:53.796 "compare_and_write": false, 00:18:53.796 "abort": true, 00:18:53.796 "seek_hole": false, 00:18:53.796 "seek_data": false, 00:18:53.796 "copy": true, 00:18:53.796 "nvme_iov_md": false 00:18:53.796 }, 00:18:53.796 "memory_domains": [ 00:18:53.796 { 00:18:53.796 "dma_device_id": "system", 00:18:53.796 "dma_device_type": 1 00:18:53.796 }, 00:18:53.796 { 00:18:53.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.797 "dma_device_type": 2 00:18:53.797 } 00:18:53.797 ], 00:18:53.797 "driver_specific": {} 00:18:53.797 } 00:18:53.797 ] 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.797 "name": "Existed_Raid", 00:18:53.797 "uuid": "3f6744c7-ff81-46de-a68c-441a8871360f", 00:18:53.797 "strip_size_kb": 0, 00:18:53.797 "state": "configuring", 00:18:53.797 "raid_level": "raid1", 00:18:53.797 "superblock": true, 00:18:53.797 "num_base_bdevs": 4, 00:18:53.797 "num_base_bdevs_discovered": 2, 00:18:53.797 "num_base_bdevs_operational": 4, 
00:18:53.797 "base_bdevs_list": [ 00:18:53.797 { 00:18:53.797 "name": "BaseBdev1", 00:18:53.797 "uuid": "25dc7f4f-00b0-4e35-8b8a-3f8de89df306", 00:18:53.797 "is_configured": true, 00:18:53.797 "data_offset": 2048, 00:18:53.797 "data_size": 63488 00:18:53.797 }, 00:18:53.797 { 00:18:53.797 "name": "BaseBdev2", 00:18:53.797 "uuid": "af083f1f-ed8e-448a-b3bc-b67f6acaedd2", 00:18:53.797 "is_configured": true, 00:18:53.797 "data_offset": 2048, 00:18:53.797 "data_size": 63488 00:18:53.797 }, 00:18:53.797 { 00:18:53.797 "name": "BaseBdev3", 00:18:53.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.797 "is_configured": false, 00:18:53.797 "data_offset": 0, 00:18:53.797 "data_size": 0 00:18:53.797 }, 00:18:53.797 { 00:18:53.797 "name": "BaseBdev4", 00:18:53.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.797 "is_configured": false, 00:18:53.797 "data_offset": 0, 00:18:53.797 "data_size": 0 00:18:53.797 } 00:18:53.797 ] 00:18:53.797 }' 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.797 14:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.363 [2024-11-04 14:51:24.062937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:54.363 BaseBdev3 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.363 [ 00:18:54.363 { 00:18:54.363 "name": "BaseBdev3", 00:18:54.363 "aliases": [ 00:18:54.363 "5f7377aa-bc83-4fa9-8071-c5839706e790" 00:18:54.363 ], 00:18:54.363 "product_name": "Malloc disk", 00:18:54.363 "block_size": 512, 00:18:54.363 "num_blocks": 65536, 00:18:54.363 "uuid": "5f7377aa-bc83-4fa9-8071-c5839706e790", 00:18:54.363 "assigned_rate_limits": { 00:18:54.363 "rw_ios_per_sec": 0, 00:18:54.363 "rw_mbytes_per_sec": 0, 00:18:54.363 "r_mbytes_per_sec": 0, 00:18:54.363 "w_mbytes_per_sec": 0 00:18:54.363 }, 00:18:54.363 "claimed": true, 00:18:54.363 "claim_type": "exclusive_write", 00:18:54.363 "zoned": false, 00:18:54.363 "supported_io_types": { 00:18:54.363 "read": true, 00:18:54.363 
"write": true, 00:18:54.363 "unmap": true, 00:18:54.363 "flush": true, 00:18:54.363 "reset": true, 00:18:54.363 "nvme_admin": false, 00:18:54.363 "nvme_io": false, 00:18:54.363 "nvme_io_md": false, 00:18:54.363 "write_zeroes": true, 00:18:54.363 "zcopy": true, 00:18:54.363 "get_zone_info": false, 00:18:54.363 "zone_management": false, 00:18:54.363 "zone_append": false, 00:18:54.363 "compare": false, 00:18:54.363 "compare_and_write": false, 00:18:54.363 "abort": true, 00:18:54.363 "seek_hole": false, 00:18:54.363 "seek_data": false, 00:18:54.363 "copy": true, 00:18:54.363 "nvme_iov_md": false 00:18:54.363 }, 00:18:54.363 "memory_domains": [ 00:18:54.363 { 00:18:54.363 "dma_device_id": "system", 00:18:54.363 "dma_device_type": 1 00:18:54.363 }, 00:18:54.363 { 00:18:54.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.363 "dma_device_type": 2 00:18:54.363 } 00:18:54.363 ], 00:18:54.363 "driver_specific": {} 00:18:54.363 } 00:18:54.363 ] 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.363 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.363 "name": "Existed_Raid", 00:18:54.363 "uuid": "3f6744c7-ff81-46de-a68c-441a8871360f", 00:18:54.363 "strip_size_kb": 0, 00:18:54.363 "state": "configuring", 00:18:54.363 "raid_level": "raid1", 00:18:54.363 "superblock": true, 00:18:54.363 "num_base_bdevs": 4, 00:18:54.363 "num_base_bdevs_discovered": 3, 00:18:54.363 "num_base_bdevs_operational": 4, 00:18:54.363 "base_bdevs_list": [ 00:18:54.363 { 00:18:54.363 "name": "BaseBdev1", 00:18:54.363 "uuid": "25dc7f4f-00b0-4e35-8b8a-3f8de89df306", 00:18:54.363 "is_configured": true, 00:18:54.363 "data_offset": 2048, 00:18:54.363 "data_size": 63488 00:18:54.363 }, 00:18:54.363 { 00:18:54.363 "name": "BaseBdev2", 00:18:54.363 "uuid": 
"af083f1f-ed8e-448a-b3bc-b67f6acaedd2", 00:18:54.363 "is_configured": true, 00:18:54.363 "data_offset": 2048, 00:18:54.363 "data_size": 63488 00:18:54.363 }, 00:18:54.363 { 00:18:54.363 "name": "BaseBdev3", 00:18:54.363 "uuid": "5f7377aa-bc83-4fa9-8071-c5839706e790", 00:18:54.363 "is_configured": true, 00:18:54.363 "data_offset": 2048, 00:18:54.363 "data_size": 63488 00:18:54.363 }, 00:18:54.363 { 00:18:54.363 "name": "BaseBdev4", 00:18:54.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.363 "is_configured": false, 00:18:54.363 "data_offset": 0, 00:18:54.364 "data_size": 0 00:18:54.364 } 00:18:54.364 ] 00:18:54.364 }' 00:18:54.364 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.364 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.928 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:54.928 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.929 [2024-11-04 14:51:24.621698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:54.929 BaseBdev4 00:18:54.929 [2024-11-04 14:51:24.622280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:54.929 [2024-11-04 14:51:24.622306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:54.929 [2024-11-04 14:51:24.622658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:54.929 [2024-11-04 14:51:24.622864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:54.929 [2024-11-04 14:51:24.622888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:18:54.929 [2024-11-04 14:51:24.623066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.929 [ 00:18:54.929 { 00:18:54.929 "name": "BaseBdev4", 00:18:54.929 "aliases": [ 00:18:54.929 "65a70f84-e40b-4bb9-9e8a-b1c131b7c4c9" 00:18:54.929 ], 00:18:54.929 "product_name": "Malloc disk", 00:18:54.929 "block_size": 512, 00:18:54.929 
"num_blocks": 65536, 00:18:54.929 "uuid": "65a70f84-e40b-4bb9-9e8a-b1c131b7c4c9", 00:18:54.929 "assigned_rate_limits": { 00:18:54.929 "rw_ios_per_sec": 0, 00:18:54.929 "rw_mbytes_per_sec": 0, 00:18:54.929 "r_mbytes_per_sec": 0, 00:18:54.929 "w_mbytes_per_sec": 0 00:18:54.929 }, 00:18:54.929 "claimed": true, 00:18:54.929 "claim_type": "exclusive_write", 00:18:54.929 "zoned": false, 00:18:54.929 "supported_io_types": { 00:18:54.929 "read": true, 00:18:54.929 "write": true, 00:18:54.929 "unmap": true, 00:18:54.929 "flush": true, 00:18:54.929 "reset": true, 00:18:54.929 "nvme_admin": false, 00:18:54.929 "nvme_io": false, 00:18:54.929 "nvme_io_md": false, 00:18:54.929 "write_zeroes": true, 00:18:54.929 "zcopy": true, 00:18:54.929 "get_zone_info": false, 00:18:54.929 "zone_management": false, 00:18:54.929 "zone_append": false, 00:18:54.929 "compare": false, 00:18:54.929 "compare_and_write": false, 00:18:54.929 "abort": true, 00:18:54.929 "seek_hole": false, 00:18:54.929 "seek_data": false, 00:18:54.929 "copy": true, 00:18:54.929 "nvme_iov_md": false 00:18:54.929 }, 00:18:54.929 "memory_domains": [ 00:18:54.929 { 00:18:54.929 "dma_device_id": "system", 00:18:54.929 "dma_device_type": 1 00:18:54.929 }, 00:18:54.929 { 00:18:54.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.929 "dma_device_type": 2 00:18:54.929 } 00:18:54.929 ], 00:18:54.929 "driver_specific": {} 00:18:54.929 } 00:18:54.929 ] 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.929 "name": "Existed_Raid", 00:18:54.929 "uuid": "3f6744c7-ff81-46de-a68c-441a8871360f", 00:18:54.929 "strip_size_kb": 0, 00:18:54.929 "state": "online", 00:18:54.929 "raid_level": "raid1", 00:18:54.929 "superblock": true, 00:18:54.929 "num_base_bdevs": 4, 
00:18:54.929 "num_base_bdevs_discovered": 4, 00:18:54.929 "num_base_bdevs_operational": 4, 00:18:54.929 "base_bdevs_list": [ 00:18:54.929 { 00:18:54.929 "name": "BaseBdev1", 00:18:54.929 "uuid": "25dc7f4f-00b0-4e35-8b8a-3f8de89df306", 00:18:54.929 "is_configured": true, 00:18:54.929 "data_offset": 2048, 00:18:54.929 "data_size": 63488 00:18:54.929 }, 00:18:54.929 { 00:18:54.929 "name": "BaseBdev2", 00:18:54.929 "uuid": "af083f1f-ed8e-448a-b3bc-b67f6acaedd2", 00:18:54.929 "is_configured": true, 00:18:54.929 "data_offset": 2048, 00:18:54.929 "data_size": 63488 00:18:54.929 }, 00:18:54.929 { 00:18:54.929 "name": "BaseBdev3", 00:18:54.929 "uuid": "5f7377aa-bc83-4fa9-8071-c5839706e790", 00:18:54.929 "is_configured": true, 00:18:54.929 "data_offset": 2048, 00:18:54.929 "data_size": 63488 00:18:54.929 }, 00:18:54.929 { 00:18:54.929 "name": "BaseBdev4", 00:18:54.929 "uuid": "65a70f84-e40b-4bb9-9e8a-b1c131b7c4c9", 00:18:54.929 "is_configured": true, 00:18:54.929 "data_offset": 2048, 00:18:54.929 "data_size": 63488 00:18:54.929 } 00:18:54.929 ] 00:18:54.929 }' 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.929 14:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.494 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:55.495 
14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.495 [2024-11-04 14:51:25.206429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.495 "name": "Existed_Raid", 00:18:55.495 "aliases": [ 00:18:55.495 "3f6744c7-ff81-46de-a68c-441a8871360f" 00:18:55.495 ], 00:18:55.495 "product_name": "Raid Volume", 00:18:55.495 "block_size": 512, 00:18:55.495 "num_blocks": 63488, 00:18:55.495 "uuid": "3f6744c7-ff81-46de-a68c-441a8871360f", 00:18:55.495 "assigned_rate_limits": { 00:18:55.495 "rw_ios_per_sec": 0, 00:18:55.495 "rw_mbytes_per_sec": 0, 00:18:55.495 "r_mbytes_per_sec": 0, 00:18:55.495 "w_mbytes_per_sec": 0 00:18:55.495 }, 00:18:55.495 "claimed": false, 00:18:55.495 "zoned": false, 00:18:55.495 "supported_io_types": { 00:18:55.495 "read": true, 00:18:55.495 "write": true, 00:18:55.495 "unmap": false, 00:18:55.495 "flush": false, 00:18:55.495 "reset": true, 00:18:55.495 "nvme_admin": false, 00:18:55.495 "nvme_io": false, 00:18:55.495 "nvme_io_md": false, 00:18:55.495 "write_zeroes": true, 00:18:55.495 "zcopy": false, 00:18:55.495 "get_zone_info": false, 00:18:55.495 "zone_management": false, 00:18:55.495 "zone_append": false, 00:18:55.495 "compare": false, 00:18:55.495 "compare_and_write": false, 00:18:55.495 "abort": false, 00:18:55.495 "seek_hole": false, 00:18:55.495 "seek_data": false, 00:18:55.495 "copy": false, 00:18:55.495 
"nvme_iov_md": false 00:18:55.495 }, 00:18:55.495 "memory_domains": [ 00:18:55.495 { 00:18:55.495 "dma_device_id": "system", 00:18:55.495 "dma_device_type": 1 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.495 "dma_device_type": 2 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "dma_device_id": "system", 00:18:55.495 "dma_device_type": 1 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.495 "dma_device_type": 2 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "dma_device_id": "system", 00:18:55.495 "dma_device_type": 1 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.495 "dma_device_type": 2 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "dma_device_id": "system", 00:18:55.495 "dma_device_type": 1 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.495 "dma_device_type": 2 00:18:55.495 } 00:18:55.495 ], 00:18:55.495 "driver_specific": { 00:18:55.495 "raid": { 00:18:55.495 "uuid": "3f6744c7-ff81-46de-a68c-441a8871360f", 00:18:55.495 "strip_size_kb": 0, 00:18:55.495 "state": "online", 00:18:55.495 "raid_level": "raid1", 00:18:55.495 "superblock": true, 00:18:55.495 "num_base_bdevs": 4, 00:18:55.495 "num_base_bdevs_discovered": 4, 00:18:55.495 "num_base_bdevs_operational": 4, 00:18:55.495 "base_bdevs_list": [ 00:18:55.495 { 00:18:55.495 "name": "BaseBdev1", 00:18:55.495 "uuid": "25dc7f4f-00b0-4e35-8b8a-3f8de89df306", 00:18:55.495 "is_configured": true, 00:18:55.495 "data_offset": 2048, 00:18:55.495 "data_size": 63488 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "name": "BaseBdev2", 00:18:55.495 "uuid": "af083f1f-ed8e-448a-b3bc-b67f6acaedd2", 00:18:55.495 "is_configured": true, 00:18:55.495 "data_offset": 2048, 00:18:55.495 "data_size": 63488 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "name": "BaseBdev3", 00:18:55.495 "uuid": "5f7377aa-bc83-4fa9-8071-c5839706e790", 00:18:55.495 "is_configured": true, 
00:18:55.495 "data_offset": 2048, 00:18:55.495 "data_size": 63488 00:18:55.495 }, 00:18:55.495 { 00:18:55.495 "name": "BaseBdev4", 00:18:55.495 "uuid": "65a70f84-e40b-4bb9-9e8a-b1c131b7c4c9", 00:18:55.495 "is_configured": true, 00:18:55.495 "data_offset": 2048, 00:18:55.495 "data_size": 63488 00:18:55.495 } 00:18:55.495 ] 00:18:55.495 } 00:18:55.495 } 00:18:55.495 }' 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:55.495 BaseBdev2 00:18:55.495 BaseBdev3 00:18:55.495 BaseBdev4' 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.495 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.753 14:51:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.753 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.753 [2024-11-04 14:51:25.578066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:56.012 14:51:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.012 "name": "Existed_Raid", 00:18:56.012 "uuid": "3f6744c7-ff81-46de-a68c-441a8871360f", 00:18:56.012 "strip_size_kb": 0, 00:18:56.012 
"state": "online", 00:18:56.012 "raid_level": "raid1", 00:18:56.012 "superblock": true, 00:18:56.012 "num_base_bdevs": 4, 00:18:56.012 "num_base_bdevs_discovered": 3, 00:18:56.012 "num_base_bdevs_operational": 3, 00:18:56.012 "base_bdevs_list": [ 00:18:56.012 { 00:18:56.012 "name": null, 00:18:56.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.012 "is_configured": false, 00:18:56.012 "data_offset": 0, 00:18:56.012 "data_size": 63488 00:18:56.012 }, 00:18:56.012 { 00:18:56.012 "name": "BaseBdev2", 00:18:56.012 "uuid": "af083f1f-ed8e-448a-b3bc-b67f6acaedd2", 00:18:56.012 "is_configured": true, 00:18:56.012 "data_offset": 2048, 00:18:56.012 "data_size": 63488 00:18:56.012 }, 00:18:56.012 { 00:18:56.012 "name": "BaseBdev3", 00:18:56.012 "uuid": "5f7377aa-bc83-4fa9-8071-c5839706e790", 00:18:56.012 "is_configured": true, 00:18:56.012 "data_offset": 2048, 00:18:56.012 "data_size": 63488 00:18:56.012 }, 00:18:56.012 { 00:18:56.012 "name": "BaseBdev4", 00:18:56.012 "uuid": "65a70f84-e40b-4bb9-9e8a-b1c131b7c4c9", 00:18:56.012 "is_configured": true, 00:18:56.012 "data_offset": 2048, 00:18:56.012 "data_size": 63488 00:18:56.012 } 00:18:56.012 ] 00:18:56.012 }' 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.012 14:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.270 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:56.270 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:56.270 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.270 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.270 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.270 14:51:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.528 [2024-11-04 14:51:26.202959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:56.528 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.529 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.529 [2024-11-04 14:51:26.347100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.787 [2024-11-04 14:51:26.491220] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:56.787 [2024-11-04 14:51:26.491586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.787 [2024-11-04 14:51:26.578303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.787 [2024-11-04 14:51:26.578566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.787 [2024-11-04 14:51:26.578601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.787 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.788 BaseBdev2 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.788 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.046 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.046 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:57.046 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.046 14:51:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:57.046 [ 00:18:57.046 { 00:18:57.046 "name": "BaseBdev2", 00:18:57.046 "aliases": [ 00:18:57.047 "13683efb-75ee-40d3-882c-dba318a395aa" 00:18:57.047 ], 00:18:57.047 "product_name": "Malloc disk", 00:18:57.047 "block_size": 512, 00:18:57.047 "num_blocks": 65536, 00:18:57.047 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:18:57.047 "assigned_rate_limits": { 00:18:57.047 "rw_ios_per_sec": 0, 00:18:57.047 "rw_mbytes_per_sec": 0, 00:18:57.047 "r_mbytes_per_sec": 0, 00:18:57.047 "w_mbytes_per_sec": 0 00:18:57.047 }, 00:18:57.047 "claimed": false, 00:18:57.047 "zoned": false, 00:18:57.047 "supported_io_types": { 00:18:57.047 "read": true, 00:18:57.047 "write": true, 00:18:57.047 "unmap": true, 00:18:57.047 "flush": true, 00:18:57.047 "reset": true, 00:18:57.047 "nvme_admin": false, 00:18:57.047 "nvme_io": false, 00:18:57.047 "nvme_io_md": false, 00:18:57.047 "write_zeroes": true, 00:18:57.047 "zcopy": true, 00:18:57.047 "get_zone_info": false, 00:18:57.047 "zone_management": false, 00:18:57.047 "zone_append": false, 00:18:57.047 "compare": false, 00:18:57.047 "compare_and_write": false, 00:18:57.047 "abort": true, 00:18:57.047 "seek_hole": false, 00:18:57.047 "seek_data": false, 00:18:57.047 "copy": true, 00:18:57.047 "nvme_iov_md": false 00:18:57.047 }, 00:18:57.047 "memory_domains": [ 00:18:57.047 { 00:18:57.047 "dma_device_id": "system", 00:18:57.047 "dma_device_type": 1 00:18:57.047 }, 00:18:57.047 { 00:18:57.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.047 "dma_device_type": 2 00:18:57.047 } 00:18:57.047 ], 00:18:57.047 "driver_specific": {} 00:18:57.047 } 00:18:57.047 ] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:57.047 14:51:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.047 BaseBdev3 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.047 14:51:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.047 [ 00:18:57.047 { 00:18:57.047 "name": "BaseBdev3", 00:18:57.047 "aliases": [ 00:18:57.047 "292408e8-614c-48fb-8622-be615376641c" 00:18:57.047 ], 00:18:57.047 "product_name": "Malloc disk", 00:18:57.047 "block_size": 512, 00:18:57.047 "num_blocks": 65536, 00:18:57.047 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:18:57.047 "assigned_rate_limits": { 00:18:57.047 "rw_ios_per_sec": 0, 00:18:57.047 "rw_mbytes_per_sec": 0, 00:18:57.047 "r_mbytes_per_sec": 0, 00:18:57.047 "w_mbytes_per_sec": 0 00:18:57.047 }, 00:18:57.047 "claimed": false, 00:18:57.047 "zoned": false, 00:18:57.047 "supported_io_types": { 00:18:57.047 "read": true, 00:18:57.047 "write": true, 00:18:57.047 "unmap": true, 00:18:57.047 "flush": true, 00:18:57.047 "reset": true, 00:18:57.047 "nvme_admin": false, 00:18:57.047 "nvme_io": false, 00:18:57.047 "nvme_io_md": false, 00:18:57.047 "write_zeroes": true, 00:18:57.047 "zcopy": true, 00:18:57.047 "get_zone_info": false, 00:18:57.047 "zone_management": false, 00:18:57.047 "zone_append": false, 00:18:57.047 "compare": false, 00:18:57.047 "compare_and_write": false, 00:18:57.047 "abort": true, 00:18:57.047 "seek_hole": false, 00:18:57.047 "seek_data": false, 00:18:57.047 "copy": true, 00:18:57.047 "nvme_iov_md": false 00:18:57.047 }, 00:18:57.047 "memory_domains": [ 00:18:57.047 { 00:18:57.047 "dma_device_id": "system", 00:18:57.047 "dma_device_type": 1 00:18:57.047 }, 00:18:57.047 { 00:18:57.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.047 "dma_device_type": 2 00:18:57.047 } 00:18:57.047 ], 00:18:57.047 "driver_specific": {} 00:18:57.047 } 00:18:57.047 ] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.047 BaseBdev4 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.047 [ 00:18:57.047 { 00:18:57.047 "name": "BaseBdev4", 00:18:57.047 "aliases": [ 00:18:57.047 "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03" 00:18:57.047 ], 00:18:57.047 "product_name": "Malloc disk", 00:18:57.047 "block_size": 512, 00:18:57.047 "num_blocks": 65536, 00:18:57.047 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:18:57.047 "assigned_rate_limits": { 00:18:57.047 "rw_ios_per_sec": 0, 00:18:57.047 "rw_mbytes_per_sec": 0, 00:18:57.047 "r_mbytes_per_sec": 0, 00:18:57.047 "w_mbytes_per_sec": 0 00:18:57.047 }, 00:18:57.047 "claimed": false, 00:18:57.047 "zoned": false, 00:18:57.047 "supported_io_types": { 00:18:57.047 "read": true, 00:18:57.047 "write": true, 00:18:57.047 "unmap": true, 00:18:57.047 "flush": true, 00:18:57.047 "reset": true, 00:18:57.047 "nvme_admin": false, 00:18:57.047 "nvme_io": false, 00:18:57.047 "nvme_io_md": false, 00:18:57.047 "write_zeroes": true, 00:18:57.047 "zcopy": true, 00:18:57.047 "get_zone_info": false, 00:18:57.047 "zone_management": false, 00:18:57.047 "zone_append": false, 00:18:57.047 "compare": false, 00:18:57.047 "compare_and_write": false, 00:18:57.047 "abort": true, 00:18:57.047 "seek_hole": false, 00:18:57.047 "seek_data": false, 00:18:57.047 "copy": true, 00:18:57.047 "nvme_iov_md": false 00:18:57.047 }, 00:18:57.047 "memory_domains": [ 00:18:57.047 { 00:18:57.047 "dma_device_id": "system", 00:18:57.047 "dma_device_type": 1 00:18:57.047 }, 00:18:57.047 { 00:18:57.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.047 "dma_device_type": 2 00:18:57.047 } 00:18:57.047 ], 00:18:57.047 "driver_specific": {} 00:18:57.047 } 00:18:57.047 ] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.047 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.047 [2024-11-04 14:51:26.853669] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.047 [2024-11-04 14:51:26.853741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.048 [2024-11-04 14:51:26.853769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:57.048 [2024-11-04 14:51:26.856335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:57.048 [2024-11-04 14:51:26.856447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.048 "name": "Existed_Raid", 00:18:57.048 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:18:57.048 "strip_size_kb": 0, 00:18:57.048 "state": "configuring", 00:18:57.048 "raid_level": "raid1", 00:18:57.048 "superblock": true, 00:18:57.048 "num_base_bdevs": 4, 00:18:57.048 "num_base_bdevs_discovered": 3, 00:18:57.048 "num_base_bdevs_operational": 4, 00:18:57.048 "base_bdevs_list": [ 00:18:57.048 { 00:18:57.048 "name": "BaseBdev1", 00:18:57.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.048 "is_configured": false, 00:18:57.048 "data_offset": 0, 00:18:57.048 "data_size": 0 00:18:57.048 }, 00:18:57.048 { 00:18:57.048 "name": "BaseBdev2", 00:18:57.048 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 
00:18:57.048 "is_configured": true, 00:18:57.048 "data_offset": 2048, 00:18:57.048 "data_size": 63488 00:18:57.048 }, 00:18:57.048 { 00:18:57.048 "name": "BaseBdev3", 00:18:57.048 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:18:57.048 "is_configured": true, 00:18:57.048 "data_offset": 2048, 00:18:57.048 "data_size": 63488 00:18:57.048 }, 00:18:57.048 { 00:18:57.048 "name": "BaseBdev4", 00:18:57.048 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:18:57.048 "is_configured": true, 00:18:57.048 "data_offset": 2048, 00:18:57.048 "data_size": 63488 00:18:57.048 } 00:18:57.048 ] 00:18:57.048 }' 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.048 14:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.615 [2024-11-04 14:51:27.357864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.615 "name": "Existed_Raid", 00:18:57.615 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:18:57.615 "strip_size_kb": 0, 00:18:57.615 "state": "configuring", 00:18:57.615 "raid_level": "raid1", 00:18:57.615 "superblock": true, 00:18:57.615 "num_base_bdevs": 4, 00:18:57.615 "num_base_bdevs_discovered": 2, 00:18:57.615 "num_base_bdevs_operational": 4, 00:18:57.615 "base_bdevs_list": [ 00:18:57.615 { 00:18:57.615 "name": "BaseBdev1", 00:18:57.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.615 "is_configured": false, 00:18:57.615 "data_offset": 0, 00:18:57.615 "data_size": 0 00:18:57.615 }, 00:18:57.615 { 00:18:57.615 "name": null, 00:18:57.615 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:18:57.615 
"is_configured": false, 00:18:57.615 "data_offset": 0, 00:18:57.615 "data_size": 63488 00:18:57.615 }, 00:18:57.615 { 00:18:57.615 "name": "BaseBdev3", 00:18:57.615 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:18:57.615 "is_configured": true, 00:18:57.615 "data_offset": 2048, 00:18:57.615 "data_size": 63488 00:18:57.615 }, 00:18:57.615 { 00:18:57.615 "name": "BaseBdev4", 00:18:57.615 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:18:57.615 "is_configured": true, 00:18:57.615 "data_offset": 2048, 00:18:57.615 "data_size": 63488 00:18:57.615 } 00:18:57.615 ] 00:18:57.615 }' 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.615 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.186 [2024-11-04 14:51:27.940015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.186 BaseBdev1 
00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.186 [ 00:18:58.186 { 00:18:58.186 "name": "BaseBdev1", 00:18:58.186 "aliases": [ 00:18:58.186 "8bc4158f-a69a-40a5-a405-eade6e6a58e7" 00:18:58.186 ], 00:18:58.186 "product_name": "Malloc disk", 00:18:58.186 "block_size": 512, 00:18:58.186 "num_blocks": 65536, 00:18:58.186 "uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:18:58.186 "assigned_rate_limits": { 00:18:58.186 
"rw_ios_per_sec": 0, 00:18:58.186 "rw_mbytes_per_sec": 0, 00:18:58.186 "r_mbytes_per_sec": 0, 00:18:58.186 "w_mbytes_per_sec": 0 00:18:58.186 }, 00:18:58.186 "claimed": true, 00:18:58.186 "claim_type": "exclusive_write", 00:18:58.186 "zoned": false, 00:18:58.186 "supported_io_types": { 00:18:58.186 "read": true, 00:18:58.186 "write": true, 00:18:58.186 "unmap": true, 00:18:58.186 "flush": true, 00:18:58.186 "reset": true, 00:18:58.186 "nvme_admin": false, 00:18:58.186 "nvme_io": false, 00:18:58.186 "nvme_io_md": false, 00:18:58.186 "write_zeroes": true, 00:18:58.186 "zcopy": true, 00:18:58.186 "get_zone_info": false, 00:18:58.186 "zone_management": false, 00:18:58.186 "zone_append": false, 00:18:58.186 "compare": false, 00:18:58.186 "compare_and_write": false, 00:18:58.186 "abort": true, 00:18:58.186 "seek_hole": false, 00:18:58.186 "seek_data": false, 00:18:58.186 "copy": true, 00:18:58.186 "nvme_iov_md": false 00:18:58.186 }, 00:18:58.186 "memory_domains": [ 00:18:58.186 { 00:18:58.186 "dma_device_id": "system", 00:18:58.186 "dma_device_type": 1 00:18:58.186 }, 00:18:58.186 { 00:18:58.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.186 "dma_device_type": 2 00:18:58.186 } 00:18:58.186 ], 00:18:58.186 "driver_specific": {} 00:18:58.186 } 00:18:58.186 ] 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.186 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.187 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.187 14:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.187 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.187 14:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.187 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.187 "name": "Existed_Raid", 00:18:58.187 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:18:58.187 "strip_size_kb": 0, 00:18:58.187 "state": "configuring", 00:18:58.187 "raid_level": "raid1", 00:18:58.187 "superblock": true, 00:18:58.187 "num_base_bdevs": 4, 00:18:58.187 "num_base_bdevs_discovered": 3, 00:18:58.187 "num_base_bdevs_operational": 4, 00:18:58.187 "base_bdevs_list": [ 00:18:58.187 { 00:18:58.187 "name": "BaseBdev1", 00:18:58.187 "uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:18:58.187 "is_configured": true, 00:18:58.187 "data_offset": 2048, 00:18:58.187 "data_size": 63488 
00:18:58.187 }, 00:18:58.187 { 00:18:58.187 "name": null, 00:18:58.187 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:18:58.187 "is_configured": false, 00:18:58.187 "data_offset": 0, 00:18:58.187 "data_size": 63488 00:18:58.187 }, 00:18:58.187 { 00:18:58.187 "name": "BaseBdev3", 00:18:58.187 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:18:58.187 "is_configured": true, 00:18:58.187 "data_offset": 2048, 00:18:58.187 "data_size": 63488 00:18:58.187 }, 00:18:58.187 { 00:18:58.187 "name": "BaseBdev4", 00:18:58.187 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:18:58.187 "is_configured": true, 00:18:58.187 "data_offset": 2048, 00:18:58.187 "data_size": 63488 00:18:58.187 } 00:18:58.187 ] 00:18:58.187 }' 00:18:58.187 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.187 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.754 
[2024-11-04 14:51:28.540272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.754 14:51:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.754 "name": "Existed_Raid", 00:18:58.754 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:18:58.754 "strip_size_kb": 0, 00:18:58.754 "state": "configuring", 00:18:58.754 "raid_level": "raid1", 00:18:58.754 "superblock": true, 00:18:58.754 "num_base_bdevs": 4, 00:18:58.754 "num_base_bdevs_discovered": 2, 00:18:58.754 "num_base_bdevs_operational": 4, 00:18:58.754 "base_bdevs_list": [ 00:18:58.754 { 00:18:58.754 "name": "BaseBdev1", 00:18:58.754 "uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:18:58.754 "is_configured": true, 00:18:58.754 "data_offset": 2048, 00:18:58.754 "data_size": 63488 00:18:58.754 }, 00:18:58.754 { 00:18:58.754 "name": null, 00:18:58.754 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:18:58.754 "is_configured": false, 00:18:58.754 "data_offset": 0, 00:18:58.754 "data_size": 63488 00:18:58.754 }, 00:18:58.754 { 00:18:58.754 "name": null, 00:18:58.754 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:18:58.754 "is_configured": false, 00:18:58.754 "data_offset": 0, 00:18:58.754 "data_size": 63488 00:18:58.754 }, 00:18:58.754 { 00:18:58.754 "name": "BaseBdev4", 00:18:58.754 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:18:58.754 "is_configured": true, 00:18:58.754 "data_offset": 2048, 00:18:58.754 "data_size": 63488 00:18:58.754 } 00:18:58.754 ] 00:18:58.754 }' 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.754 14:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.321 
14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.321 [2024-11-04 14:51:29.148404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.321 "name": "Existed_Raid", 00:18:59.321 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:18:59.321 "strip_size_kb": 0, 00:18:59.321 "state": "configuring", 00:18:59.321 "raid_level": "raid1", 00:18:59.321 "superblock": true, 00:18:59.321 "num_base_bdevs": 4, 00:18:59.321 "num_base_bdevs_discovered": 3, 00:18:59.321 "num_base_bdevs_operational": 4, 00:18:59.321 "base_bdevs_list": [ 00:18:59.321 { 00:18:59.321 "name": "BaseBdev1", 00:18:59.321 "uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:18:59.321 "is_configured": true, 00:18:59.321 "data_offset": 2048, 00:18:59.321 "data_size": 63488 00:18:59.321 }, 00:18:59.321 { 00:18:59.321 "name": null, 00:18:59.321 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:18:59.321 "is_configured": false, 00:18:59.321 "data_offset": 0, 00:18:59.321 "data_size": 63488 00:18:59.321 }, 00:18:59.321 { 00:18:59.321 "name": "BaseBdev3", 00:18:59.321 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:18:59.321 "is_configured": true, 00:18:59.321 "data_offset": 2048, 00:18:59.321 "data_size": 63488 00:18:59.321 }, 00:18:59.321 { 00:18:59.321 "name": "BaseBdev4", 00:18:59.321 "uuid": 
"ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:18:59.321 "is_configured": true, 00:18:59.321 "data_offset": 2048, 00:18:59.321 "data_size": 63488 00:18:59.321 } 00:18:59.321 ] 00:18:59.321 }' 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.321 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.891 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.891 [2024-11-04 14:51:29.716612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.148 "name": "Existed_Raid", 00:19:00.148 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:19:00.148 "strip_size_kb": 0, 00:19:00.148 "state": "configuring", 00:19:00.148 "raid_level": "raid1", 00:19:00.148 "superblock": true, 00:19:00.148 "num_base_bdevs": 4, 00:19:00.148 "num_base_bdevs_discovered": 2, 00:19:00.148 "num_base_bdevs_operational": 4, 00:19:00.148 "base_bdevs_list": [ 00:19:00.148 { 00:19:00.148 "name": null, 00:19:00.148 
"uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:19:00.148 "is_configured": false, 00:19:00.148 "data_offset": 0, 00:19:00.148 "data_size": 63488 00:19:00.148 }, 00:19:00.148 { 00:19:00.148 "name": null, 00:19:00.148 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:19:00.148 "is_configured": false, 00:19:00.148 "data_offset": 0, 00:19:00.148 "data_size": 63488 00:19:00.148 }, 00:19:00.148 { 00:19:00.148 "name": "BaseBdev3", 00:19:00.148 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:19:00.148 "is_configured": true, 00:19:00.148 "data_offset": 2048, 00:19:00.148 "data_size": 63488 00:19:00.148 }, 00:19:00.148 { 00:19:00.148 "name": "BaseBdev4", 00:19:00.148 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:19:00.148 "is_configured": true, 00:19:00.148 "data_offset": 2048, 00:19:00.148 "data_size": 63488 00:19:00.148 } 00:19:00.148 ] 00:19:00.148 }' 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.148 14:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.714 [2024-11-04 14:51:30.363451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.714 14:51:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.714 "name": "Existed_Raid", 00:19:00.714 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:19:00.714 "strip_size_kb": 0, 00:19:00.714 "state": "configuring", 00:19:00.714 "raid_level": "raid1", 00:19:00.714 "superblock": true, 00:19:00.714 "num_base_bdevs": 4, 00:19:00.714 "num_base_bdevs_discovered": 3, 00:19:00.714 "num_base_bdevs_operational": 4, 00:19:00.714 "base_bdevs_list": [ 00:19:00.714 { 00:19:00.714 "name": null, 00:19:00.714 "uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:19:00.714 "is_configured": false, 00:19:00.714 "data_offset": 0, 00:19:00.714 "data_size": 63488 00:19:00.714 }, 00:19:00.714 { 00:19:00.714 "name": "BaseBdev2", 00:19:00.714 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:19:00.714 "is_configured": true, 00:19:00.714 "data_offset": 2048, 00:19:00.714 "data_size": 63488 00:19:00.714 }, 00:19:00.714 { 00:19:00.714 "name": "BaseBdev3", 00:19:00.714 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:19:00.714 "is_configured": true, 00:19:00.714 "data_offset": 2048, 00:19:00.714 "data_size": 63488 00:19:00.714 }, 00:19:00.714 { 00:19:00.714 "name": "BaseBdev4", 00:19:00.714 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:19:00.714 "is_configured": true, 00:19:00.714 "data_offset": 2048, 00:19:00.714 "data_size": 63488 00:19:00.714 } 00:19:00.714 ] 00:19:00.714 }' 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.714 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.296 14:51:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8bc4158f-a69a-40a5-a405-eade6e6a58e7 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.296 14:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.296 [2024-11-04 14:51:31.029902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:01.296 NewBaseBdev 00:19:01.296 [2024-11-04 14:51:31.030468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:01.296 [2024-11-04 14:51:31.030500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:01.296 [2024-11-04 14:51:31.030839] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:01.297 [2024-11-04 14:51:31.031038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:01.297 [2024-11-04 14:51:31.031054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.297 [2024-11-04 14:51:31.031218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.297 
14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.297 [ 00:19:01.297 { 00:19:01.297 "name": "NewBaseBdev", 00:19:01.297 "aliases": [ 00:19:01.297 "8bc4158f-a69a-40a5-a405-eade6e6a58e7" 00:19:01.297 ], 00:19:01.297 "product_name": "Malloc disk", 00:19:01.297 "block_size": 512, 00:19:01.297 "num_blocks": 65536, 00:19:01.297 "uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:19:01.297 "assigned_rate_limits": { 00:19:01.297 "rw_ios_per_sec": 0, 00:19:01.297 "rw_mbytes_per_sec": 0, 00:19:01.297 "r_mbytes_per_sec": 0, 00:19:01.297 "w_mbytes_per_sec": 0 00:19:01.297 }, 00:19:01.297 "claimed": true, 00:19:01.297 "claim_type": "exclusive_write", 00:19:01.297 "zoned": false, 00:19:01.297 "supported_io_types": { 00:19:01.297 "read": true, 00:19:01.297 "write": true, 00:19:01.297 "unmap": true, 00:19:01.297 "flush": true, 00:19:01.297 "reset": true, 00:19:01.297 "nvme_admin": false, 00:19:01.297 "nvme_io": false, 00:19:01.297 "nvme_io_md": false, 00:19:01.297 "write_zeroes": true, 00:19:01.297 "zcopy": true, 00:19:01.297 "get_zone_info": false, 00:19:01.297 "zone_management": false, 00:19:01.297 "zone_append": false, 00:19:01.297 "compare": false, 00:19:01.297 "compare_and_write": false, 00:19:01.297 "abort": true, 00:19:01.297 "seek_hole": false, 00:19:01.297 "seek_data": false, 00:19:01.297 "copy": true, 00:19:01.297 "nvme_iov_md": false 00:19:01.297 }, 00:19:01.297 "memory_domains": [ 00:19:01.297 { 00:19:01.297 "dma_device_id": "system", 00:19:01.297 "dma_device_type": 1 00:19:01.297 }, 00:19:01.297 { 00:19:01.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.297 "dma_device_type": 2 00:19:01.297 } 00:19:01.297 ], 00:19:01.297 "driver_specific": {} 00:19:01.297 } 00:19:01.297 ] 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:01.297 14:51:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.297 "name": "Existed_Raid", 00:19:01.297 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:19:01.297 "strip_size_kb": 0, 00:19:01.297 
"state": "online", 00:19:01.297 "raid_level": "raid1", 00:19:01.297 "superblock": true, 00:19:01.297 "num_base_bdevs": 4, 00:19:01.297 "num_base_bdevs_discovered": 4, 00:19:01.297 "num_base_bdevs_operational": 4, 00:19:01.297 "base_bdevs_list": [ 00:19:01.297 { 00:19:01.297 "name": "NewBaseBdev", 00:19:01.297 "uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:19:01.297 "is_configured": true, 00:19:01.297 "data_offset": 2048, 00:19:01.297 "data_size": 63488 00:19:01.297 }, 00:19:01.297 { 00:19:01.297 "name": "BaseBdev2", 00:19:01.297 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:19:01.297 "is_configured": true, 00:19:01.297 "data_offset": 2048, 00:19:01.297 "data_size": 63488 00:19:01.297 }, 00:19:01.297 { 00:19:01.297 "name": "BaseBdev3", 00:19:01.297 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:19:01.297 "is_configured": true, 00:19:01.297 "data_offset": 2048, 00:19:01.297 "data_size": 63488 00:19:01.297 }, 00:19:01.297 { 00:19:01.297 "name": "BaseBdev4", 00:19:01.297 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:19:01.297 "is_configured": true, 00:19:01.297 "data_offset": 2048, 00:19:01.297 "data_size": 63488 00:19:01.297 } 00:19:01.297 ] 00:19:01.297 }' 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.297 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:01.882 
14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.882 [2024-11-04 14:51:31.598641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.882 "name": "Existed_Raid", 00:19:01.882 "aliases": [ 00:19:01.882 "a29493dc-b8ba-46de-8bb0-c30bf32983dd" 00:19:01.882 ], 00:19:01.882 "product_name": "Raid Volume", 00:19:01.882 "block_size": 512, 00:19:01.882 "num_blocks": 63488, 00:19:01.882 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:19:01.882 "assigned_rate_limits": { 00:19:01.882 "rw_ios_per_sec": 0, 00:19:01.882 "rw_mbytes_per_sec": 0, 00:19:01.882 "r_mbytes_per_sec": 0, 00:19:01.882 "w_mbytes_per_sec": 0 00:19:01.882 }, 00:19:01.882 "claimed": false, 00:19:01.882 "zoned": false, 00:19:01.882 "supported_io_types": { 00:19:01.882 "read": true, 00:19:01.882 "write": true, 00:19:01.882 "unmap": false, 00:19:01.882 "flush": false, 00:19:01.882 "reset": true, 00:19:01.882 "nvme_admin": false, 00:19:01.882 "nvme_io": false, 00:19:01.882 "nvme_io_md": false, 00:19:01.882 "write_zeroes": true, 00:19:01.882 "zcopy": false, 00:19:01.882 "get_zone_info": false, 00:19:01.882 "zone_management": false, 00:19:01.882 "zone_append": false, 00:19:01.882 "compare": false, 00:19:01.882 "compare_and_write": false, 00:19:01.882 
"abort": false, 00:19:01.882 "seek_hole": false, 00:19:01.882 "seek_data": false, 00:19:01.882 "copy": false, 00:19:01.882 "nvme_iov_md": false 00:19:01.882 }, 00:19:01.882 "memory_domains": [ 00:19:01.882 { 00:19:01.882 "dma_device_id": "system", 00:19:01.882 "dma_device_type": 1 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.882 "dma_device_type": 2 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "dma_device_id": "system", 00:19:01.882 "dma_device_type": 1 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.882 "dma_device_type": 2 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "dma_device_id": "system", 00:19:01.882 "dma_device_type": 1 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.882 "dma_device_type": 2 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "dma_device_id": "system", 00:19:01.882 "dma_device_type": 1 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.882 "dma_device_type": 2 00:19:01.882 } 00:19:01.882 ], 00:19:01.882 "driver_specific": { 00:19:01.882 "raid": { 00:19:01.882 "uuid": "a29493dc-b8ba-46de-8bb0-c30bf32983dd", 00:19:01.882 "strip_size_kb": 0, 00:19:01.882 "state": "online", 00:19:01.882 "raid_level": "raid1", 00:19:01.882 "superblock": true, 00:19:01.882 "num_base_bdevs": 4, 00:19:01.882 "num_base_bdevs_discovered": 4, 00:19:01.882 "num_base_bdevs_operational": 4, 00:19:01.882 "base_bdevs_list": [ 00:19:01.882 { 00:19:01.882 "name": "NewBaseBdev", 00:19:01.882 "uuid": "8bc4158f-a69a-40a5-a405-eade6e6a58e7", 00:19:01.882 "is_configured": true, 00:19:01.882 "data_offset": 2048, 00:19:01.882 "data_size": 63488 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "name": "BaseBdev2", 00:19:01.882 "uuid": "13683efb-75ee-40d3-882c-dba318a395aa", 00:19:01.882 "is_configured": true, 00:19:01.882 "data_offset": 2048, 00:19:01.882 "data_size": 63488 00:19:01.882 }, 00:19:01.882 { 
00:19:01.882 "name": "BaseBdev3", 00:19:01.882 "uuid": "292408e8-614c-48fb-8622-be615376641c", 00:19:01.882 "is_configured": true, 00:19:01.882 "data_offset": 2048, 00:19:01.882 "data_size": 63488 00:19:01.882 }, 00:19:01.882 { 00:19:01.882 "name": "BaseBdev4", 00:19:01.882 "uuid": "ceeb0a6f-01dc-4675-83cb-bc70d9b11c03", 00:19:01.882 "is_configured": true, 00:19:01.882 "data_offset": 2048, 00:19:01.882 "data_size": 63488 00:19:01.882 } 00:19:01.882 ] 00:19:01.882 } 00:19:01.882 } 00:19:01.882 }' 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:01.882 BaseBdev2 00:19:01.882 BaseBdev3 00:19:01.882 BaseBdev4' 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.882 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.142 [2024-11-04 14:51:31.978196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:02.142 [2024-11-04 14:51:31.978455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.142 [2024-11-04 14:51:31.978662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.142 [2024-11-04 14:51:31.979153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.142 [2024-11-04 14:51:31.979190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74154 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74154 ']' 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74154 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:02.142 14:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74154 00:19:02.142 killing process with pid 74154 00:19:02.142 14:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:02.142 14:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:02.142 14:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74154' 00:19:02.142 14:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74154 00:19:02.142 [2024-11-04 14:51:32.017204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.142 14:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74154 00:19:02.709 [2024-11-04 14:51:32.369894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.643 14:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:03.643 ************************************ 00:19:03.643 END TEST raid_state_function_test_sb 00:19:03.643 ************************************ 00:19:03.643 00:19:03.643 real 0m12.710s 
00:19:03.643 user 0m21.111s 00:19:03.643 sys 0m1.772s 00:19:03.643 14:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:03.643 14:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.643 14:51:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:03.643 14:51:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:03.643 14:51:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:03.643 14:51:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.643 ************************************ 00:19:03.643 START TEST raid_superblock_test 00:19:03.643 ************************************ 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:03.643 14:51:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74840 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74840 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74840 ']' 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:03.643 14:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.901 [2024-11-04 14:51:33.572361] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:19:03.901 [2024-11-04 14:51:33.572744] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74840 ] 00:19:03.901 [2024-11-04 14:51:33.754922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.159 [2024-11-04 14:51:33.883585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.417 [2024-11-04 14:51:34.088011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.417 [2024-11-04 14:51:34.088070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:04.695 
14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.695 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.967 malloc1 00:19:04.967 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.967 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.967 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.967 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.967 [2024-11-04 14:51:34.600832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.967 [2024-11-04 14:51:34.600937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.967 [2024-11-04 14:51:34.600970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:04.967 [2024-11-04 14:51:34.600985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.967 [2024-11-04 14:51:34.603763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.967 [2024-11-04 14:51:34.603809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.967 pt1 00:19:04.967 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.967 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 malloc2 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 [2024-11-04 14:51:34.653017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.968 [2024-11-04 14:51:34.653326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.968 [2024-11-04 14:51:34.653368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:04.968 [2024-11-04 14:51:34.653384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.968 [2024-11-04 14:51:34.656165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.968 [2024-11-04 14:51:34.656211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.968 
pt2 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 malloc3 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 [2024-11-04 14:51:34.716820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:04.968 [2024-11-04 14:51:34.717096] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.968 [2024-11-04 14:51:34.717141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:04.968 [2024-11-04 14:51:34.717158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.968 [2024-11-04 14:51:34.719899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.968 [2024-11-04 14:51:34.719944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:04.968 pt3 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 malloc4 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 [2024-11-04 14:51:34.772858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:04.968 [2024-11-04 14:51:34.772929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.968 [2024-11-04 14:51:34.772960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:04.968 [2024-11-04 14:51:34.772975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.968 [2024-11-04 14:51:34.775748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.968 [2024-11-04 14:51:34.775792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:04.968 pt4 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 [2024-11-04 14:51:34.780890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.968 [2024-11-04 14:51:34.783514] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.968 [2024-11-04 14:51:34.783737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:04.968 [2024-11-04 14:51:34.783853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:04.968 [2024-11-04 14:51:34.784198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:04.968 [2024-11-04 14:51:34.784343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:04.968 [2024-11-04 14:51:34.784813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:04.968 [2024-11-04 14:51:34.785170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:04.968 [2024-11-04 14:51:34.785200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:04.968 [2024-11-04 14:51:34.785453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.968 
14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.968 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.968 "name": "raid_bdev1", 00:19:04.968 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:04.968 "strip_size_kb": 0, 00:19:04.968 "state": "online", 00:19:04.968 "raid_level": "raid1", 00:19:04.968 "superblock": true, 00:19:04.968 "num_base_bdevs": 4, 00:19:04.968 "num_base_bdevs_discovered": 4, 00:19:04.968 "num_base_bdevs_operational": 4, 00:19:04.968 "base_bdevs_list": [ 00:19:04.968 { 00:19:04.968 "name": "pt1", 00:19:04.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.968 "is_configured": true, 00:19:04.968 "data_offset": 2048, 00:19:04.968 "data_size": 63488 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "name": "pt2", 00:19:04.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.968 "is_configured": true, 00:19:04.968 "data_offset": 2048, 00:19:04.968 "data_size": 63488 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "name": "pt3", 00:19:04.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:04.968 "is_configured": true, 00:19:04.968 "data_offset": 2048, 00:19:04.968 "data_size": 63488 
00:19:04.969 }, 00:19:04.969 { 00:19:04.969 "name": "pt4", 00:19:04.969 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:04.969 "is_configured": true, 00:19:04.969 "data_offset": 2048, 00:19:04.969 "data_size": 63488 00:19:04.969 } 00:19:04.969 ] 00:19:04.969 }' 00:19:04.969 14:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.969 14:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.536 [2024-11-04 14:51:35.289972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.536 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.536 "name": "raid_bdev1", 00:19:05.536 "aliases": [ 00:19:05.536 "3439a31d-0fe5-4636-8ce4-1410ddd91631" 00:19:05.536 ], 
00:19:05.536 "product_name": "Raid Volume", 00:19:05.536 "block_size": 512, 00:19:05.536 "num_blocks": 63488, 00:19:05.536 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:05.536 "assigned_rate_limits": { 00:19:05.536 "rw_ios_per_sec": 0, 00:19:05.536 "rw_mbytes_per_sec": 0, 00:19:05.536 "r_mbytes_per_sec": 0, 00:19:05.536 "w_mbytes_per_sec": 0 00:19:05.536 }, 00:19:05.536 "claimed": false, 00:19:05.536 "zoned": false, 00:19:05.536 "supported_io_types": { 00:19:05.536 "read": true, 00:19:05.536 "write": true, 00:19:05.536 "unmap": false, 00:19:05.536 "flush": false, 00:19:05.536 "reset": true, 00:19:05.536 "nvme_admin": false, 00:19:05.536 "nvme_io": false, 00:19:05.536 "nvme_io_md": false, 00:19:05.536 "write_zeroes": true, 00:19:05.536 "zcopy": false, 00:19:05.536 "get_zone_info": false, 00:19:05.536 "zone_management": false, 00:19:05.536 "zone_append": false, 00:19:05.536 "compare": false, 00:19:05.536 "compare_and_write": false, 00:19:05.536 "abort": false, 00:19:05.536 "seek_hole": false, 00:19:05.536 "seek_data": false, 00:19:05.536 "copy": false, 00:19:05.536 "nvme_iov_md": false 00:19:05.536 }, 00:19:05.536 "memory_domains": [ 00:19:05.536 { 00:19:05.536 "dma_device_id": "system", 00:19:05.536 "dma_device_type": 1 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.536 "dma_device_type": 2 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "dma_device_id": "system", 00:19:05.536 "dma_device_type": 1 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.536 "dma_device_type": 2 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "dma_device_id": "system", 00:19:05.536 "dma_device_type": 1 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.536 "dma_device_type": 2 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "dma_device_id": "system", 00:19:05.536 "dma_device_type": 1 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:05.536 "dma_device_type": 2 00:19:05.536 } 00:19:05.536 ], 00:19:05.536 "driver_specific": { 00:19:05.536 "raid": { 00:19:05.536 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:05.536 "strip_size_kb": 0, 00:19:05.536 "state": "online", 00:19:05.536 "raid_level": "raid1", 00:19:05.536 "superblock": true, 00:19:05.536 "num_base_bdevs": 4, 00:19:05.536 "num_base_bdevs_discovered": 4, 00:19:05.536 "num_base_bdevs_operational": 4, 00:19:05.536 "base_bdevs_list": [ 00:19:05.536 { 00:19:05.536 "name": "pt1", 00:19:05.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.536 "is_configured": true, 00:19:05.536 "data_offset": 2048, 00:19:05.536 "data_size": 63488 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "name": "pt2", 00:19:05.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.536 "is_configured": true, 00:19:05.536 "data_offset": 2048, 00:19:05.536 "data_size": 63488 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "name": "pt3", 00:19:05.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:05.536 "is_configured": true, 00:19:05.536 "data_offset": 2048, 00:19:05.536 "data_size": 63488 00:19:05.536 }, 00:19:05.536 { 00:19:05.536 "name": "pt4", 00:19:05.537 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:05.537 "is_configured": true, 00:19:05.537 "data_offset": 2048, 00:19:05.537 "data_size": 63488 00:19:05.537 } 00:19:05.537 ] 00:19:05.537 } 00:19:05.537 } 00:19:05.537 }' 00:19:05.537 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.537 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:05.537 pt2 00:19:05.537 pt3 00:19:05.537 pt4' 00:19:05.537 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.795 14:51:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:05.795 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.795 [2024-11-04 14:51:35.681950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.054 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.054 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3439a31d-0fe5-4636-8ce4-1410ddd91631 00:19:06.054 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3439a31d-0fe5-4636-8ce4-1410ddd91631 ']' 00:19:06.054 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:06.054 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.054 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.054 [2024-11-04 14:51:35.725609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.054 [2024-11-04 14:51:35.725826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.055 [2024-11-04 14:51:35.726020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.055 [2024-11-04 14:51:35.726273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.055 [2024-11-04 14:51:35.726310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.055 14:51:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.055 [2024-11-04 14:51:35.885669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:06.055 [2024-11-04 14:51:35.888331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:06.055 [2024-11-04 14:51:35.888400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:06.055 [2024-11-04 14:51:35.888451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:06.055 [2024-11-04 14:51:35.888528] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:06.055 [2024-11-04 14:51:35.888600] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:06.055 [2024-11-04 14:51:35.888633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:06.055 [2024-11-04 14:51:35.888664] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:06.055 [2024-11-04 14:51:35.888686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.055 [2024-11-04 14:51:35.888702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:19:06.055 request: 00:19:06.055 { 00:19:06.055 "name": "raid_bdev1", 00:19:06.055 "raid_level": "raid1", 00:19:06.055 "base_bdevs": [ 00:19:06.055 "malloc1", 00:19:06.055 "malloc2", 00:19:06.055 "malloc3", 00:19:06.055 "malloc4" 00:19:06.055 ], 00:19:06.055 "superblock": false, 00:19:06.055 "method": "bdev_raid_create", 00:19:06.055 "req_id": 1 00:19:06.055 } 00:19:06.055 Got JSON-RPC error response 00:19:06.055 response: 00:19:06.055 { 00:19:06.055 "code": -17, 00:19:06.055 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:06.055 } 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:06.055 
14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.055 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.314 [2024-11-04 14:51:35.945660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:06.314 [2024-11-04 14:51:35.945851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.314 [2024-11-04 14:51:35.945918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:06.314 [2024-11-04 14:51:35.946029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.314 [2024-11-04 14:51:35.948932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.314 [2024-11-04 14:51:35.948982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:06.314 [2024-11-04 14:51:35.949067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:06.314 [2024-11-04 14:51:35.949141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:06.314 pt1 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:06.314 14:51:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.314 14:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.314 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.314 "name": "raid_bdev1", 00:19:06.314 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:06.314 "strip_size_kb": 0, 00:19:06.314 "state": "configuring", 00:19:06.314 "raid_level": "raid1", 00:19:06.314 "superblock": true, 00:19:06.314 "num_base_bdevs": 4, 00:19:06.314 "num_base_bdevs_discovered": 1, 00:19:06.314 "num_base_bdevs_operational": 4, 00:19:06.314 "base_bdevs_list": [ 00:19:06.314 { 00:19:06.314 "name": "pt1", 00:19:06.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.314 "is_configured": true, 00:19:06.314 "data_offset": 2048, 00:19:06.314 "data_size": 63488 00:19:06.314 }, 00:19:06.314 { 00:19:06.314 "name": null, 00:19:06.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.314 "is_configured": false, 00:19:06.314 "data_offset": 2048, 00:19:06.314 "data_size": 63488 00:19:06.314 }, 00:19:06.314 { 00:19:06.314 "name": null, 00:19:06.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.314 
"is_configured": false, 00:19:06.314 "data_offset": 2048, 00:19:06.314 "data_size": 63488 00:19:06.314 }, 00:19:06.314 { 00:19:06.314 "name": null, 00:19:06.315 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.315 "is_configured": false, 00:19:06.315 "data_offset": 2048, 00:19:06.315 "data_size": 63488 00:19:06.315 } 00:19:06.315 ] 00:19:06.315 }' 00:19:06.315 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.315 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.573 [2024-11-04 14:51:36.441878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.573 [2024-11-04 14:51:36.442196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.573 [2024-11-04 14:51:36.442251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:06.573 [2024-11-04 14:51:36.442273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.573 [2024-11-04 14:51:36.442846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.573 [2024-11-04 14:51:36.442893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.573 [2024-11-04 14:51:36.442997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:06.573 [2024-11-04 14:51:36.443041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:19:06.573 pt2 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.573 [2024-11-04 14:51:36.449840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.573 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.573 14:51:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.574 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.832 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.832 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.832 "name": "raid_bdev1", 00:19:06.832 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:06.832 "strip_size_kb": 0, 00:19:06.832 "state": "configuring", 00:19:06.832 "raid_level": "raid1", 00:19:06.832 "superblock": true, 00:19:06.832 "num_base_bdevs": 4, 00:19:06.832 "num_base_bdevs_discovered": 1, 00:19:06.832 "num_base_bdevs_operational": 4, 00:19:06.832 "base_bdevs_list": [ 00:19:06.832 { 00:19:06.832 "name": "pt1", 00:19:06.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.832 "is_configured": true, 00:19:06.832 "data_offset": 2048, 00:19:06.832 "data_size": 63488 00:19:06.832 }, 00:19:06.832 { 00:19:06.832 "name": null, 00:19:06.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.832 "is_configured": false, 00:19:06.832 "data_offset": 0, 00:19:06.832 "data_size": 63488 00:19:06.832 }, 00:19:06.832 { 00:19:06.832 "name": null, 00:19:06.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.832 "is_configured": false, 00:19:06.832 "data_offset": 2048, 00:19:06.832 "data_size": 63488 00:19:06.832 }, 00:19:06.832 { 00:19:06.832 "name": null, 00:19:06.832 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.832 "is_configured": false, 00:19:06.832 "data_offset": 2048, 00:19:06.832 "data_size": 63488 00:19:06.832 } 00:19:06.832 ] 00:19:06.832 }' 00:19:06.832 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.832 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.091 [2024-11-04 14:51:36.965997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.091 [2024-11-04 14:51:36.966320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.091 [2024-11-04 14:51:36.966495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:07.091 [2024-11-04 14:51:36.966524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.091 [2024-11-04 14:51:36.967106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.091 [2024-11-04 14:51:36.967132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.091 [2024-11-04 14:51:36.967383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:07.091 [2024-11-04 14:51:36.967457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.091 pt2 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:07.091 14:51:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.091 [2024-11-04 14:51:36.973943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:07.091 [2024-11-04 14:51:36.974133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.091 [2024-11-04 14:51:36.974203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:07.091 [2024-11-04 14:51:36.974348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.091 [2024-11-04 14:51:36.974816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.091 [2024-11-04 14:51:36.974852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:07.091 [2024-11-04 14:51:36.974932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:07.091 [2024-11-04 14:51:36.974959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:07.091 pt3 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:07.091 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:07.092 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:07.092 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.092 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.092 [2024-11-04 14:51:36.981924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:07.092 [2024-11-04 
14:51:36.982110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.092 [2024-11-04 14:51:36.982181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:07.350 [2024-11-04 14:51:36.982365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.350 [2024-11-04 14:51:36.982870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.350 pt4 00:19:07.350 [2024-11-04 14:51:36.983018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:07.350 [2024-11-04 14:51:36.983123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:07.350 [2024-11-04 14:51:36.983153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:07.350 [2024-11-04 14:51:36.983361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:07.350 [2024-11-04 14:51:36.983378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:07.350 [2024-11-04 14:51:36.983707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:07.350 [2024-11-04 14:51:36.983902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:07.350 [2024-11-04 14:51:36.983928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:07.350 [2024-11-04 14:51:36.984090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.350 14:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.350 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.350 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.350 "name": "raid_bdev1", 00:19:07.350 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:07.350 "strip_size_kb": 0, 00:19:07.350 "state": "online", 00:19:07.350 "raid_level": "raid1", 00:19:07.350 "superblock": true, 00:19:07.350 "num_base_bdevs": 4, 00:19:07.350 
"num_base_bdevs_discovered": 4, 00:19:07.350 "num_base_bdevs_operational": 4, 00:19:07.350 "base_bdevs_list": [ 00:19:07.350 { 00:19:07.350 "name": "pt1", 00:19:07.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.350 "is_configured": true, 00:19:07.350 "data_offset": 2048, 00:19:07.350 "data_size": 63488 00:19:07.350 }, 00:19:07.350 { 00:19:07.350 "name": "pt2", 00:19:07.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.350 "is_configured": true, 00:19:07.351 "data_offset": 2048, 00:19:07.351 "data_size": 63488 00:19:07.351 }, 00:19:07.351 { 00:19:07.351 "name": "pt3", 00:19:07.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:07.351 "is_configured": true, 00:19:07.351 "data_offset": 2048, 00:19:07.351 "data_size": 63488 00:19:07.351 }, 00:19:07.351 { 00:19:07.351 "name": "pt4", 00:19:07.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:07.351 "is_configured": true, 00:19:07.351 "data_offset": 2048, 00:19:07.351 "data_size": 63488 00:19:07.351 } 00:19:07.351 ] 00:19:07.351 }' 00:19:07.351 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.351 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:07.917 14:51:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.917 [2024-11-04 14:51:37.530604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.917 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.917 "name": "raid_bdev1", 00:19:07.917 "aliases": [ 00:19:07.917 "3439a31d-0fe5-4636-8ce4-1410ddd91631" 00:19:07.917 ], 00:19:07.917 "product_name": "Raid Volume", 00:19:07.917 "block_size": 512, 00:19:07.917 "num_blocks": 63488, 00:19:07.917 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:07.917 "assigned_rate_limits": { 00:19:07.917 "rw_ios_per_sec": 0, 00:19:07.917 "rw_mbytes_per_sec": 0, 00:19:07.917 "r_mbytes_per_sec": 0, 00:19:07.917 "w_mbytes_per_sec": 0 00:19:07.917 }, 00:19:07.918 "claimed": false, 00:19:07.918 "zoned": false, 00:19:07.918 "supported_io_types": { 00:19:07.918 "read": true, 00:19:07.918 "write": true, 00:19:07.918 "unmap": false, 00:19:07.918 "flush": false, 00:19:07.918 "reset": true, 00:19:07.918 "nvme_admin": false, 00:19:07.918 "nvme_io": false, 00:19:07.918 "nvme_io_md": false, 00:19:07.918 "write_zeroes": true, 00:19:07.918 "zcopy": false, 00:19:07.918 "get_zone_info": false, 00:19:07.918 "zone_management": false, 00:19:07.918 "zone_append": false, 00:19:07.918 "compare": false, 00:19:07.918 "compare_and_write": false, 00:19:07.918 "abort": false, 00:19:07.918 "seek_hole": false, 00:19:07.918 "seek_data": false, 00:19:07.918 "copy": false, 00:19:07.918 "nvme_iov_md": false 00:19:07.918 }, 00:19:07.918 "memory_domains": [ 00:19:07.918 { 00:19:07.918 "dma_device_id": "system", 00:19:07.918 
"dma_device_type": 1 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.918 "dma_device_type": 2 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "dma_device_id": "system", 00:19:07.918 "dma_device_type": 1 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.918 "dma_device_type": 2 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "dma_device_id": "system", 00:19:07.918 "dma_device_type": 1 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.918 "dma_device_type": 2 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "dma_device_id": "system", 00:19:07.918 "dma_device_type": 1 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.918 "dma_device_type": 2 00:19:07.918 } 00:19:07.918 ], 00:19:07.918 "driver_specific": { 00:19:07.918 "raid": { 00:19:07.918 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:07.918 "strip_size_kb": 0, 00:19:07.918 "state": "online", 00:19:07.918 "raid_level": "raid1", 00:19:07.918 "superblock": true, 00:19:07.918 "num_base_bdevs": 4, 00:19:07.918 "num_base_bdevs_discovered": 4, 00:19:07.918 "num_base_bdevs_operational": 4, 00:19:07.918 "base_bdevs_list": [ 00:19:07.918 { 00:19:07.918 "name": "pt1", 00:19:07.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.918 "is_configured": true, 00:19:07.918 "data_offset": 2048, 00:19:07.918 "data_size": 63488 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "name": "pt2", 00:19:07.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.918 "is_configured": true, 00:19:07.918 "data_offset": 2048, 00:19:07.918 "data_size": 63488 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "name": "pt3", 00:19:07.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:07.918 "is_configured": true, 00:19:07.918 "data_offset": 2048, 00:19:07.918 "data_size": 63488 00:19:07.918 }, 00:19:07.918 { 00:19:07.918 "name": "pt4", 00:19:07.918 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:19:07.918 "is_configured": true, 00:19:07.918 "data_offset": 2048, 00:19:07.918 "data_size": 63488 00:19:07.918 } 00:19:07.918 ] 00:19:07.918 } 00:19:07.918 } 00:19:07.918 }' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:07.918 pt2 00:19:07.918 pt3 00:19:07.918 pt4' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.918 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.177 [2024-11-04 14:51:37.914550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3439a31d-0fe5-4636-8ce4-1410ddd91631 '!=' 3439a31d-0fe5-4636-8ce4-1410ddd91631 ']' 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.177 [2024-11-04 14:51:37.966263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:08.177 14:51:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.177 14:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.177 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.177 "name": "raid_bdev1", 00:19:08.177 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:08.177 "strip_size_kb": 0, 00:19:08.177 "state": "online", 
00:19:08.177 "raid_level": "raid1", 00:19:08.177 "superblock": true, 00:19:08.177 "num_base_bdevs": 4, 00:19:08.177 "num_base_bdevs_discovered": 3, 00:19:08.177 "num_base_bdevs_operational": 3, 00:19:08.177 "base_bdevs_list": [ 00:19:08.177 { 00:19:08.177 "name": null, 00:19:08.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.177 "is_configured": false, 00:19:08.177 "data_offset": 0, 00:19:08.177 "data_size": 63488 00:19:08.177 }, 00:19:08.177 { 00:19:08.177 "name": "pt2", 00:19:08.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.177 "is_configured": true, 00:19:08.177 "data_offset": 2048, 00:19:08.177 "data_size": 63488 00:19:08.177 }, 00:19:08.177 { 00:19:08.177 "name": "pt3", 00:19:08.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:08.177 "is_configured": true, 00:19:08.177 "data_offset": 2048, 00:19:08.177 "data_size": 63488 00:19:08.177 }, 00:19:08.177 { 00:19:08.177 "name": "pt4", 00:19:08.177 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:08.177 "is_configured": true, 00:19:08.177 "data_offset": 2048, 00:19:08.177 "data_size": 63488 00:19:08.177 } 00:19:08.177 ] 00:19:08.177 }' 00:19:08.177 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.177 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.743 [2024-11-04 14:51:38.502403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.743 [2024-11-04 14:51:38.502469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.743 [2024-11-04 14:51:38.502567] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:19:08.743 [2024-11-04 14:51:38.502677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.743 [2024-11-04 14:51:38.502694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:08.743 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:08.744 
14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.744 [2024-11-04 14:51:38.590411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:08.744 [2024-11-04 14:51:38.590772] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.744 [2024-11-04 14:51:38.590818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:08.744 [2024-11-04 14:51:38.590834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.744 [2024-11-04 14:51:38.593888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.744 [2024-11-04 14:51:38.593933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:08.744 [2024-11-04 14:51:38.594049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:08.744 [2024-11-04 14:51:38.594109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:08.744 pt2 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.744 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.002 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.002 "name": "raid_bdev1", 00:19:09.002 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:09.002 "strip_size_kb": 0, 00:19:09.002 "state": "configuring", 00:19:09.002 "raid_level": "raid1", 00:19:09.002 "superblock": true, 00:19:09.002 "num_base_bdevs": 4, 00:19:09.002 "num_base_bdevs_discovered": 1, 00:19:09.002 "num_base_bdevs_operational": 3, 00:19:09.002 "base_bdevs_list": [ 00:19:09.002 { 00:19:09.002 "name": null, 00:19:09.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.003 "is_configured": false, 00:19:09.003 "data_offset": 2048, 00:19:09.003 "data_size": 63488 00:19:09.003 }, 00:19:09.003 { 00:19:09.003 "name": "pt2", 00:19:09.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.003 "is_configured": true, 00:19:09.003 "data_offset": 2048, 00:19:09.003 "data_size": 63488 00:19:09.003 }, 00:19:09.003 { 00:19:09.003 "name": null, 00:19:09.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:09.003 "is_configured": false, 00:19:09.003 "data_offset": 2048, 00:19:09.003 "data_size": 63488 00:19:09.003 }, 00:19:09.003 { 00:19:09.003 "name": null, 00:19:09.003 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:09.003 "is_configured": false, 00:19:09.003 "data_offset": 2048, 00:19:09.003 "data_size": 63488 00:19:09.003 } 00:19:09.003 ] 00:19:09.003 }' 
00:19:09.003 14:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.003 14:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.261 [2024-11-04 14:51:39.118583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:09.261 [2024-11-04 14:51:39.118892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.261 [2024-11-04 14:51:39.118940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:09.261 [2024-11-04 14:51:39.118957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.261 [2024-11-04 14:51:39.119562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.261 [2024-11-04 14:51:39.119589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:09.261 [2024-11-04 14:51:39.119698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:09.261 [2024-11-04 14:51:39.119731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:09.261 pt3 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.261 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.519 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.519 "name": "raid_bdev1", 00:19:09.519 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:09.519 "strip_size_kb": 0, 00:19:09.519 "state": "configuring", 00:19:09.519 "raid_level": "raid1", 00:19:09.519 "superblock": true, 00:19:09.519 "num_base_bdevs": 4, 00:19:09.519 "num_base_bdevs_discovered": 2, 00:19:09.519 "num_base_bdevs_operational": 3, 00:19:09.519 
"base_bdevs_list": [ 00:19:09.519 { 00:19:09.519 "name": null, 00:19:09.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.519 "is_configured": false, 00:19:09.519 "data_offset": 2048, 00:19:09.519 "data_size": 63488 00:19:09.519 }, 00:19:09.519 { 00:19:09.519 "name": "pt2", 00:19:09.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.519 "is_configured": true, 00:19:09.519 "data_offset": 2048, 00:19:09.519 "data_size": 63488 00:19:09.519 }, 00:19:09.519 { 00:19:09.519 "name": "pt3", 00:19:09.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:09.519 "is_configured": true, 00:19:09.519 "data_offset": 2048, 00:19:09.519 "data_size": 63488 00:19:09.519 }, 00:19:09.519 { 00:19:09.519 "name": null, 00:19:09.519 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:09.519 "is_configured": false, 00:19:09.519 "data_offset": 2048, 00:19:09.519 "data_size": 63488 00:19:09.519 } 00:19:09.519 ] 00:19:09.519 }' 00:19:09.519 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.519 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.778 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:09.778 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:09.778 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:09.778 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:09.778 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.778 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.778 [2024-11-04 14:51:39.658760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:09.778 [2024-11-04 14:51:39.659140] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.778 [2024-11-04 14:51:39.659189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:09.778 [2024-11-04 14:51:39.659205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.778 [2024-11-04 14:51:39.659834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.778 [2024-11-04 14:51:39.659860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:09.778 [2024-11-04 14:51:39.660026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:09.778 [2024-11-04 14:51:39.660066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:09.778 [2024-11-04 14:51:39.660237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:09.778 [2024-11-04 14:51:39.660254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:09.778 [2024-11-04 14:51:39.660592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:09.778 [2024-11-04 14:51:39.660784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:09.778 [2024-11-04 14:51:39.660804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:09.778 [2024-11-04 14:51:39.660976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.778 pt4 00:19:09.778 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.779 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.037 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.037 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.037 "name": "raid_bdev1", 00:19:10.037 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:10.037 "strip_size_kb": 0, 00:19:10.037 "state": "online", 00:19:10.037 "raid_level": "raid1", 00:19:10.037 "superblock": true, 00:19:10.037 "num_base_bdevs": 4, 00:19:10.037 "num_base_bdevs_discovered": 3, 00:19:10.037 "num_base_bdevs_operational": 3, 00:19:10.037 "base_bdevs_list": [ 00:19:10.037 { 00:19:10.037 "name": null, 00:19:10.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.037 "is_configured": false, 00:19:10.037 
"data_offset": 2048, 00:19:10.037 "data_size": 63488 00:19:10.037 }, 00:19:10.037 { 00:19:10.037 "name": "pt2", 00:19:10.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.037 "is_configured": true, 00:19:10.037 "data_offset": 2048, 00:19:10.037 "data_size": 63488 00:19:10.037 }, 00:19:10.037 { 00:19:10.037 "name": "pt3", 00:19:10.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:10.037 "is_configured": true, 00:19:10.037 "data_offset": 2048, 00:19:10.037 "data_size": 63488 00:19:10.037 }, 00:19:10.037 { 00:19:10.037 "name": "pt4", 00:19:10.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:10.037 "is_configured": true, 00:19:10.037 "data_offset": 2048, 00:19:10.037 "data_size": 63488 00:19:10.037 } 00:19:10.037 ] 00:19:10.037 }' 00:19:10.037 14:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.037 14:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.296 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:10.296 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.296 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.296 [2024-11-04 14:51:40.182895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.296 [2024-11-04 14:51:40.182957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.296 [2024-11-04 14:51:40.183053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.296 [2024-11-04 14:51:40.183150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.296 [2024-11-04 14:51:40.183170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:10.554 14:51:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.554 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.555 [2024-11-04 14:51:40.250880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:10.555 [2024-11-04 14:51:40.251192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:19:10.555 [2024-11-04 14:51:40.251242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:10.555 [2024-11-04 14:51:40.251263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.555 [2024-11-04 14:51:40.254268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.555 [2024-11-04 14:51:40.254317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:10.555 [2024-11-04 14:51:40.254423] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:10.555 [2024-11-04 14:51:40.254487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:10.555 [2024-11-04 14:51:40.254648] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:10.555 [2024-11-04 14:51:40.254672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.555 [2024-11-04 14:51:40.254693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:10.555 [2024-11-04 14:51:40.254773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:10.555 [2024-11-04 14:51:40.254930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:10.555 pt1 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.555 "name": "raid_bdev1", 00:19:10.555 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:10.555 "strip_size_kb": 0, 00:19:10.555 "state": "configuring", 00:19:10.555 "raid_level": "raid1", 00:19:10.555 "superblock": true, 00:19:10.555 "num_base_bdevs": 4, 00:19:10.555 "num_base_bdevs_discovered": 2, 00:19:10.555 "num_base_bdevs_operational": 3, 00:19:10.555 "base_bdevs_list": [ 00:19:10.555 { 00:19:10.555 "name": null, 00:19:10.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.555 "is_configured": false, 00:19:10.555 "data_offset": 2048, 00:19:10.555 
"data_size": 63488 00:19:10.555 }, 00:19:10.555 { 00:19:10.555 "name": "pt2", 00:19:10.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.555 "is_configured": true, 00:19:10.555 "data_offset": 2048, 00:19:10.555 "data_size": 63488 00:19:10.555 }, 00:19:10.555 { 00:19:10.555 "name": "pt3", 00:19:10.555 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:10.555 "is_configured": true, 00:19:10.555 "data_offset": 2048, 00:19:10.555 "data_size": 63488 00:19:10.555 }, 00:19:10.555 { 00:19:10.555 "name": null, 00:19:10.555 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:10.555 "is_configured": false, 00:19:10.555 "data_offset": 2048, 00:19:10.555 "data_size": 63488 00:19:10.555 } 00:19:10.555 ] 00:19:10.555 }' 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.555 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.124 [2024-11-04 
14:51:40.835163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:11.124 [2024-11-04 14:51:40.835498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.124 [2024-11-04 14:51:40.835582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:11.124 [2024-11-04 14:51:40.835762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.124 [2024-11-04 14:51:40.836367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.124 [2024-11-04 14:51:40.836400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:11.124 [2024-11-04 14:51:40.836512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:11.124 [2024-11-04 14:51:40.836563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:11.124 [2024-11-04 14:51:40.836752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:11.124 [2024-11-04 14:51:40.836776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:11.124 [2024-11-04 14:51:40.837102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:11.124 [2024-11-04 14:51:40.837321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:11.124 [2024-11-04 14:51:40.837344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:11.124 [2024-11-04 14:51:40.837517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.124 pt4 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:11.124 14:51:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.124 "name": "raid_bdev1", 00:19:11.124 "uuid": "3439a31d-0fe5-4636-8ce4-1410ddd91631", 00:19:11.124 "strip_size_kb": 0, 00:19:11.124 "state": "online", 00:19:11.124 "raid_level": "raid1", 00:19:11.124 "superblock": true, 00:19:11.124 "num_base_bdevs": 4, 00:19:11.124 "num_base_bdevs_discovered": 3, 00:19:11.124 "num_base_bdevs_operational": 3, 00:19:11.124 "base_bdevs_list": [ 00:19:11.124 { 
00:19:11.124 "name": null, 00:19:11.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.124 "is_configured": false, 00:19:11.124 "data_offset": 2048, 00:19:11.124 "data_size": 63488 00:19:11.124 }, 00:19:11.124 { 00:19:11.124 "name": "pt2", 00:19:11.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.124 "is_configured": true, 00:19:11.124 "data_offset": 2048, 00:19:11.124 "data_size": 63488 00:19:11.124 }, 00:19:11.124 { 00:19:11.124 "name": "pt3", 00:19:11.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:11.124 "is_configured": true, 00:19:11.124 "data_offset": 2048, 00:19:11.124 "data_size": 63488 00:19:11.124 }, 00:19:11.124 { 00:19:11.124 "name": "pt4", 00:19:11.124 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:11.124 "is_configured": true, 00:19:11.124 "data_offset": 2048, 00:19:11.124 "data_size": 63488 00:19:11.124 } 00:19:11.124 ] 00:19:11.124 }' 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.124 14:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:11.691 
14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.691 [2024-11-04 14:51:41.435692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3439a31d-0fe5-4636-8ce4-1410ddd91631 '!=' 3439a31d-0fe5-4636-8ce4-1410ddd91631 ']' 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74840 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74840 ']' 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74840 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74840 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74840' 00:19:11.691 killing process with pid 74840 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74840 00:19:11.691 [2024-11-04 14:51:41.515700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:11.691 14:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74840 00:19:11.691 [2024-11-04 14:51:41.516011] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.691 [2024-11-04 14:51:41.516299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.691 [2024-11-04 14:51:41.516468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:12.257 [2024-11-04 14:51:41.872274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.191 14:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:13.191 00:19:13.191 real 0m9.468s 00:19:13.191 user 0m15.557s 00:19:13.191 sys 0m1.395s 00:19:13.191 14:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:13.191 ************************************ 00:19:13.191 END TEST raid_superblock_test 00:19:13.191 ************************************ 00:19:13.191 14:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.191 14:51:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:19:13.191 14:51:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:13.191 14:51:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:13.191 14:51:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.191 ************************************ 00:19:13.191 START TEST raid_read_error_test 00:19:13.191 ************************************ 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:13.191 14:51:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.K4ekSZAiPq 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75334 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75334 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75334 ']' 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:13.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.191 14:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.191 14:51:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:13.191 14:51:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.449 [2024-11-04 14:51:43.094699] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:19:13.449 [2024-11-04 14:51:43.094852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75334 ] 00:19:13.449 [2024-11-04 14:51:43.273495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.707 [2024-11-04 14:51:43.405503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.965 [2024-11-04 14:51:43.613048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.965 [2024-11-04 14:51:43.613121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.531 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 BaseBdev1_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 true 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 [2024-11-04 14:51:44.216958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:14.532 [2024-11-04 14:51:44.217053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.532 [2024-11-04 14:51:44.217084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:14.532 [2024-11-04 14:51:44.217103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.532 [2024-11-04 14:51:44.220096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.532 [2024-11-04 14:51:44.220150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:14.532 BaseBdev1 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 BaseBdev2_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 true 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 [2024-11-04 14:51:44.277544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:14.532 [2024-11-04 14:51:44.277666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.532 [2024-11-04 14:51:44.277696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:14.532 [2024-11-04 14:51:44.277715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.532 [2024-11-04 14:51:44.280650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.532 BaseBdev2 00:19:14.532 [2024-11-04 14:51:44.280937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 BaseBdev3_malloc 00:19:14.532 14:51:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 true 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 [2024-11-04 14:51:44.351059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:14.532 [2024-11-04 14:51:44.351150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.532 [2024-11-04 14:51:44.351177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:14.532 [2024-11-04 14:51:44.351196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.532 [2024-11-04 14:51:44.354093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.532 [2024-11-04 14:51:44.354144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:14.532 BaseBdev3 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 BaseBdev4_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 true 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 [2024-11-04 14:51:44.407657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:14.532 [2024-11-04 14:51:44.407939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.532 [2024-11-04 14:51:44.407976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:14.532 [2024-11-04 14:51:44.407997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.532 [2024-11-04 14:51:44.410920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.532 [2024-11-04 14:51:44.411088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:14.532 BaseBdev4 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.532 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 [2024-11-04 14:51:44.415883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.532 [2024-11-04 14:51:44.418493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.532 [2024-11-04 14:51:44.418726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:14.532 [2024-11-04 14:51:44.418879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:14.532 [2024-11-04 14:51:44.419249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:14.532 [2024-11-04 14:51:44.419424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:14.532 [2024-11-04 14:51:44.419753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:14.532 [2024-11-04 14:51:44.419986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:14.532 [2024-11-04 14:51:44.420003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:14.532 [2024-11-04 14:51:44.420275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:14.791 14:51:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.791 "name": "raid_bdev1", 00:19:14.791 "uuid": "3ab5aa10-e928-4f9a-b0dc-5201d07bda89", 00:19:14.791 "strip_size_kb": 0, 00:19:14.791 "state": "online", 00:19:14.791 "raid_level": "raid1", 00:19:14.791 "superblock": true, 00:19:14.791 "num_base_bdevs": 4, 00:19:14.791 "num_base_bdevs_discovered": 4, 00:19:14.791 "num_base_bdevs_operational": 4, 00:19:14.791 "base_bdevs_list": [ 00:19:14.791 { 
00:19:14.791 "name": "BaseBdev1", 00:19:14.791 "uuid": "2b5073e8-be5d-54dc-8528-c11e82508eae", 00:19:14.791 "is_configured": true, 00:19:14.791 "data_offset": 2048, 00:19:14.791 "data_size": 63488 00:19:14.791 }, 00:19:14.791 { 00:19:14.791 "name": "BaseBdev2", 00:19:14.791 "uuid": "bd4353d6-2184-5fbe-b190-066573d77b0c", 00:19:14.791 "is_configured": true, 00:19:14.791 "data_offset": 2048, 00:19:14.791 "data_size": 63488 00:19:14.791 }, 00:19:14.791 { 00:19:14.791 "name": "BaseBdev3", 00:19:14.791 "uuid": "c2a7e4b9-541f-5d28-ac6f-32f18184af2b", 00:19:14.791 "is_configured": true, 00:19:14.791 "data_offset": 2048, 00:19:14.791 "data_size": 63488 00:19:14.791 }, 00:19:14.791 { 00:19:14.791 "name": "BaseBdev4", 00:19:14.791 "uuid": "222b6fc6-d571-52e3-9ec7-85f989a08287", 00:19:14.791 "is_configured": true, 00:19:14.791 "data_offset": 2048, 00:19:14.791 "data_size": 63488 00:19:14.791 } 00:19:14.791 ] 00:19:14.791 }' 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.791 14:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.357 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:15.357 14:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:15.357 [2024-11-04 14:51:45.073877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.292 14:51:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.292 14:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.292 14:51:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.292 14:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.292 "name": "raid_bdev1", 00:19:16.292 "uuid": "3ab5aa10-e928-4f9a-b0dc-5201d07bda89", 00:19:16.292 "strip_size_kb": 0, 00:19:16.292 "state": "online", 00:19:16.292 "raid_level": "raid1", 00:19:16.292 "superblock": true, 00:19:16.292 "num_base_bdevs": 4, 00:19:16.292 "num_base_bdevs_discovered": 4, 00:19:16.292 "num_base_bdevs_operational": 4, 00:19:16.292 "base_bdevs_list": [ 00:19:16.292 { 00:19:16.292 "name": "BaseBdev1", 00:19:16.292 "uuid": "2b5073e8-be5d-54dc-8528-c11e82508eae", 00:19:16.292 "is_configured": true, 00:19:16.292 "data_offset": 2048, 00:19:16.292 "data_size": 63488 00:19:16.292 }, 00:19:16.292 { 00:19:16.292 "name": "BaseBdev2", 00:19:16.292 "uuid": "bd4353d6-2184-5fbe-b190-066573d77b0c", 00:19:16.292 "is_configured": true, 00:19:16.292 "data_offset": 2048, 00:19:16.292 "data_size": 63488 00:19:16.292 }, 00:19:16.292 { 00:19:16.292 "name": "BaseBdev3", 00:19:16.292 "uuid": "c2a7e4b9-541f-5d28-ac6f-32f18184af2b", 00:19:16.292 "is_configured": true, 00:19:16.292 "data_offset": 2048, 00:19:16.292 "data_size": 63488 00:19:16.292 }, 00:19:16.292 { 00:19:16.292 "name": "BaseBdev4", 00:19:16.292 "uuid": "222b6fc6-d571-52e3-9ec7-85f989a08287", 00:19:16.292 "is_configured": true, 00:19:16.292 "data_offset": 2048, 00:19:16.292 "data_size": 63488 00:19:16.292 } 00:19:16.292 ] 00:19:16.292 }' 00:19:16.292 14:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.292 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.859 14:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:16.859 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.859 14:51:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.859 [2024-11-04 14:51:46.480338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.859 [2024-11-04 14:51:46.480720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.859 [2024-11-04 14:51:46.483769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.859 { 00:19:16.859 "results": [ 00:19:16.859 { 00:19:16.859 "job": "raid_bdev1", 00:19:16.859 "core_mask": "0x1", 00:19:16.859 "workload": "randrw", 00:19:16.859 "percentage": 50, 00:19:16.859 "status": "finished", 00:19:16.859 "queue_depth": 1, 00:19:16.859 "io_size": 131072, 00:19:16.859 "runtime": 1.404387, 00:19:16.859 "iops": 8015.596840472035, 00:19:16.859 "mibps": 1001.9496050590044, 00:19:16.860 "io_failed": 0, 00:19:16.860 "io_timeout": 0, 00:19:16.860 "avg_latency_us": 120.771644310207, 00:19:16.860 "min_latency_us": 38.4, 00:19:16.860 "max_latency_us": 1750.1090909090908 00:19:16.860 } 00:19:16.860 ], 00:19:16.860 "core_count": 1 00:19:16.860 } 00:19:16.860 [2024-11-04 14:51:46.483997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.860 [2024-11-04 14:51:46.484162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.860 [2024-11-04 14:51:46.484184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75334 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75334 ']' 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75334 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75334 00:19:16.860 killing process with pid 75334 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75334' 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75334 00:19:16.860 [2024-11-04 14:51:46.522200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.860 14:51:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75334 00:19:17.118 [2024-11-04 14:51:46.789644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.K4ekSZAiPq 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:18.053 00:19:18.053 real 0m4.822s 00:19:18.053 user 0m6.017s 00:19:18.053 sys 0m0.611s 
00:19:18.053 ************************************ 00:19:18.053 END TEST raid_read_error_test 00:19:18.053 ************************************ 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:18.053 14:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.053 14:51:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:19:18.053 14:51:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:18.053 14:51:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:18.053 14:51:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.054 ************************************ 00:19:18.054 START TEST raid_write_error_test 00:19:18.054 ************************************ 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.x7KeIOWBmT 00:19:18.054 14:51:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75480 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75480 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75480 ']' 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.054 14:51:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.312 [2024-11-04 14:51:47.996752] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:19:18.312 [2024-11-04 14:51:47.996967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75480 ] 00:19:18.312 [2024-11-04 14:51:48.185358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.571 [2024-11-04 14:51:48.316323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.829 [2024-11-04 14:51:48.522661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.829 [2024-11-04 14:51:48.522725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.397 BaseBdev1_malloc 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.397 true 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.397 [2024-11-04 14:51:49.082392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:19.397 [2024-11-04 14:51:49.082483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.397 [2024-11-04 14:51:49.082514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:19.397 [2024-11-04 14:51:49.082533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.397 [2024-11-04 14:51:49.085371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.397 [2024-11-04 14:51:49.085420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:19.397 BaseBdev1 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.397 BaseBdev2_malloc 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:19.397 14:51:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.397 true 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.397 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.398 [2024-11-04 14:51:49.142789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:19.398 [2024-11-04 14:51:49.143105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.398 [2024-11-04 14:51:49.143143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:19.398 [2024-11-04 14:51:49.143163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.398 [2024-11-04 14:51:49.145987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.398 [2024-11-04 14:51:49.146039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:19.398 BaseBdev2 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:19.398 BaseBdev3_malloc 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.398 true 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.398 [2024-11-04 14:51:49.215955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:19.398 [2024-11-04 14:51:49.216288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.398 [2024-11-04 14:51:49.216327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:19.398 [2024-11-04 14:51:49.216347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.398 [2024-11-04 14:51:49.219304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.398 [2024-11-04 14:51:49.219354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:19.398 BaseBdev3 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.398 BaseBdev4_malloc 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.398 true 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.398 [2024-11-04 14:51:49.276661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:19.398 [2024-11-04 14:51:49.277024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.398 [2024-11-04 14:51:49.277064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:19.398 [2024-11-04 14:51:49.277084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.398 [2024-11-04 14:51:49.279911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.398 BaseBdev4 00:19:19.398 [2024-11-04 14:51:49.280094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.398 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.398 [2024-11-04 14:51:49.284807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:19.657 [2024-11-04 14:51:49.287367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:19.657 [2024-11-04 14:51:49.287480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.657 [2024-11-04 14:51:49.287583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:19.657 [2024-11-04 14:51:49.287901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:19.657 [2024-11-04 14:51:49.287925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:19.657 [2024-11-04 14:51:49.288257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:19.658 [2024-11-04 14:51:49.288506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:19.658 [2024-11-04 14:51:49.288522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:19.658 [2024-11-04 14:51:49.288766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.658 "name": "raid_bdev1", 00:19:19.658 "uuid": "3b0266c0-801d-48db-80c6-9b37935039b8", 00:19:19.658 "strip_size_kb": 0, 00:19:19.658 "state": "online", 00:19:19.658 "raid_level": "raid1", 00:19:19.658 "superblock": true, 00:19:19.658 "num_base_bdevs": 4, 00:19:19.658 "num_base_bdevs_discovered": 4, 00:19:19.658 
"num_base_bdevs_operational": 4, 00:19:19.658 "base_bdevs_list": [ 00:19:19.658 { 00:19:19.658 "name": "BaseBdev1", 00:19:19.658 "uuid": "a38a6bb3-7cb9-5230-9e4e-cfa989f54541", 00:19:19.658 "is_configured": true, 00:19:19.658 "data_offset": 2048, 00:19:19.658 "data_size": 63488 00:19:19.658 }, 00:19:19.658 { 00:19:19.658 "name": "BaseBdev2", 00:19:19.658 "uuid": "7e78db88-f941-5697-acd9-f29f80110ed7", 00:19:19.658 "is_configured": true, 00:19:19.658 "data_offset": 2048, 00:19:19.658 "data_size": 63488 00:19:19.658 }, 00:19:19.658 { 00:19:19.658 "name": "BaseBdev3", 00:19:19.658 "uuid": "3e433277-2cbe-5175-9b0b-4b9e78dd8d33", 00:19:19.658 "is_configured": true, 00:19:19.658 "data_offset": 2048, 00:19:19.658 "data_size": 63488 00:19:19.658 }, 00:19:19.658 { 00:19:19.658 "name": "BaseBdev4", 00:19:19.658 "uuid": "6abe6d0f-a08d-522b-9760-7ec79b143a02", 00:19:19.658 "is_configured": true, 00:19:19.658 "data_offset": 2048, 00:19:19.658 "data_size": 63488 00:19:19.658 } 00:19:19.658 ] 00:19:19.658 }' 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.658 14:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.288 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:20.288 14:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:20.288 [2024-11-04 14:51:49.934502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:21.247 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.248 [2024-11-04 14:51:50.819287] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:21.248 [2024-11-04 14:51:50.819392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.248 [2024-11-04 14:51:50.819674] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.248 "name": "raid_bdev1", 00:19:21.248 "uuid": "3b0266c0-801d-48db-80c6-9b37935039b8", 00:19:21.248 "strip_size_kb": 0, 00:19:21.248 "state": "online", 00:19:21.248 "raid_level": "raid1", 00:19:21.248 "superblock": true, 00:19:21.248 "num_base_bdevs": 4, 00:19:21.248 "num_base_bdevs_discovered": 3, 00:19:21.248 "num_base_bdevs_operational": 3, 00:19:21.248 "base_bdevs_list": [ 00:19:21.248 { 00:19:21.248 "name": null, 00:19:21.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.248 "is_configured": false, 00:19:21.248 "data_offset": 0, 00:19:21.248 "data_size": 63488 00:19:21.248 }, 00:19:21.248 { 00:19:21.248 "name": "BaseBdev2", 00:19:21.248 "uuid": "7e78db88-f941-5697-acd9-f29f80110ed7", 00:19:21.248 "is_configured": true, 00:19:21.248 "data_offset": 2048, 00:19:21.248 "data_size": 63488 00:19:21.248 }, 00:19:21.248 { 00:19:21.248 "name": "BaseBdev3", 00:19:21.248 "uuid": "3e433277-2cbe-5175-9b0b-4b9e78dd8d33", 00:19:21.248 "is_configured": true, 00:19:21.248 "data_offset": 2048, 00:19:21.248 "data_size": 63488 00:19:21.248 }, 00:19:21.248 { 00:19:21.248 "name": "BaseBdev4", 00:19:21.248 "uuid": "6abe6d0f-a08d-522b-9760-7ec79b143a02", 00:19:21.248 "is_configured": true, 00:19:21.248 "data_offset": 2048, 00:19:21.248 "data_size": 63488 00:19:21.248 } 00:19:21.248 ] 
00:19:21.248 }' 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.248 14:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.506 14:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:21.506 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.506 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.506 [2024-11-04 14:51:51.342715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.506 [2024-11-04 14:51:51.342773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.506 [2024-11-04 14:51:51.346034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.506 [2024-11-04 14:51:51.346093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.506 [2024-11-04 14:51:51.346247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.506 [2024-11-04 14:51:51.346269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:21.506 { 00:19:21.506 "results": [ 00:19:21.506 { 00:19:21.506 "job": "raid_bdev1", 00:19:21.506 "core_mask": "0x1", 00:19:21.506 "workload": "randrw", 00:19:21.506 "percentage": 50, 00:19:21.506 "status": "finished", 00:19:21.506 "queue_depth": 1, 00:19:21.506 "io_size": 131072, 00:19:21.506 "runtime": 1.405346, 00:19:21.506 "iops": 8109.746638906006, 00:19:21.506 "mibps": 1013.7183298632508, 00:19:21.506 "io_failed": 0, 00:19:21.506 "io_timeout": 0, 00:19:21.506 "avg_latency_us": 119.02643087893945, 00:19:21.506 "min_latency_us": 43.75272727272727, 00:19:21.507 "max_latency_us": 1809.6872727272728 00:19:21.507 } 00:19:21.507 ], 00:19:21.507 "core_count": 1 
00:19:21.507 } 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75480 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75480 ']' 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75480 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75480 00:19:21.507 killing process with pid 75480 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75480' 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75480 00:19:21.507 [2024-11-04 14:51:51.381796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.507 14:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75480 00:19:22.072 [2024-11-04 14:51:51.676326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.x7KeIOWBmT 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:23.006 ************************************ 00:19:23.006 END TEST raid_write_error_test 00:19:23.006 ************************************ 00:19:23.006 00:19:23.006 real 0m4.913s 00:19:23.006 user 0m6.062s 00:19:23.006 sys 0m0.630s 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:23.006 14:51:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.006 14:51:52 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:19:23.006 14:51:52 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:19:23.006 14:51:52 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:19:23.006 14:51:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:23.006 14:51:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:23.006 14:51:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.007 ************************************ 00:19:23.007 START TEST raid_rebuild_test 00:19:23.007 ************************************ 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:23.007 
14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75628 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75628 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75628 ']' 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.007 14:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.272 [2024-11-04 14:51:52.977345] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:19:23.272 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:23.272 Zero copy mechanism will not be used. 
00:19:23.272 [2024-11-04 14:51:52.977541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75628 ] 00:19:23.530 [2024-11-04 14:51:53.172546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.530 [2024-11-04 14:51:53.302215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.788 [2024-11-04 14:51:53.510473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.788 [2024-11-04 14:51:53.510558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 BaseBdev1_malloc 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 [2024-11-04 14:51:53.998680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:24.354 
[2024-11-04 14:51:53.998783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.354 [2024-11-04 14:51:53.998817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:24.354 [2024-11-04 14:51:53.998837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.354 [2024-11-04 14:51:54.001716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.354 [2024-11-04 14:51:54.001763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:24.354 BaseBdev1 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 BaseBdev2_malloc 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 [2024-11-04 14:51:54.048732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:24.354 [2024-11-04 14:51:54.048820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.354 [2024-11-04 14:51:54.048849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:19:24.354 [2024-11-04 14:51:54.048869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.354 [2024-11-04 14:51:54.051736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.354 [2024-11-04 14:51:54.051784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:24.354 BaseBdev2 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 spare_malloc 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 spare_delay 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 [2024-11-04 14:51:54.111478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:24.354 [2024-11-04 14:51:54.111571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:24.354 [2024-11-04 14:51:54.111603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:24.354 [2024-11-04 14:51:54.111620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.354 [2024-11-04 14:51:54.114499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.354 [2024-11-04 14:51:54.114546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:24.354 spare 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 [2024-11-04 14:51:54.119553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.354 [2024-11-04 14:51:54.121977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.354 [2024-11-04 14:51:54.122113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:24.354 [2024-11-04 14:51:54.122136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:24.354 [2024-11-04 14:51:54.122499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:24.354 [2024-11-04 14:51:54.122723] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:24.354 [2024-11-04 14:51:54.122749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:24.354 [2024-11-04 14:51:54.122949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.354 "name": "raid_bdev1", 00:19:24.354 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:24.354 "strip_size_kb": 0, 00:19:24.354 "state": "online", 00:19:24.354 
"raid_level": "raid1", 00:19:24.354 "superblock": false, 00:19:24.354 "num_base_bdevs": 2, 00:19:24.354 "num_base_bdevs_discovered": 2, 00:19:24.354 "num_base_bdevs_operational": 2, 00:19:24.354 "base_bdevs_list": [ 00:19:24.354 { 00:19:24.354 "name": "BaseBdev1", 00:19:24.354 "uuid": "df36d6d2-21af-55fa-93fc-e7308b513ff8", 00:19:24.354 "is_configured": true, 00:19:24.354 "data_offset": 0, 00:19:24.354 "data_size": 65536 00:19:24.354 }, 00:19:24.354 { 00:19:24.354 "name": "BaseBdev2", 00:19:24.354 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:24.354 "is_configured": true, 00:19:24.354 "data_offset": 0, 00:19:24.354 "data_size": 65536 00:19:24.354 } 00:19:24.354 ] 00:19:24.354 }' 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.354 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:24.921 [2024-11-04 14:51:54.696154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.921 14:51:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:24.921 14:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:25.486 [2024-11-04 14:51:55.115974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:25.486 /dev/nbd0 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:25.486 1+0 records in 00:19:25.486 1+0 records out 00:19:25.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434022 s, 9.4 MB/s 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:25.486 14:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:32.045 65536+0 records in 00:19:32.045 65536+0 records out 00:19:32.045 33554432 bytes (34 MB, 32 MiB) copied, 6.47065 s, 5.2 MB/s 00:19:32.045 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:32.045 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:32.045 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:32.045 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:32.045 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:32.045 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.045 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:32.302 [2024-11-04 14:52:01.944051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.302 [2024-11-04 14:52:01.992144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.302 14:52:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.302 14:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.302 14:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.302 14:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.302 "name": "raid_bdev1", 00:19:32.302 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:32.302 "strip_size_kb": 0, 00:19:32.302 "state": "online", 00:19:32.302 "raid_level": "raid1", 00:19:32.302 "superblock": false, 00:19:32.302 "num_base_bdevs": 2, 00:19:32.302 "num_base_bdevs_discovered": 1, 00:19:32.302 "num_base_bdevs_operational": 1, 00:19:32.302 "base_bdevs_list": [ 00:19:32.302 { 00:19:32.303 "name": null, 00:19:32.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.303 "is_configured": false, 00:19:32.303 "data_offset": 0, 00:19:32.303 "data_size": 65536 00:19:32.303 }, 00:19:32.303 { 00:19:32.303 "name": "BaseBdev2", 00:19:32.303 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:32.303 "is_configured": true, 00:19:32.303 "data_offset": 0, 00:19:32.303 "data_size": 65536 00:19:32.303 } 00:19:32.303 ] 00:19:32.303 }' 00:19:32.303 14:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.303 14:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.869 14:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:32.869 14:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.869 14:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.869 [2024-11-04 14:52:02.524392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.869 [2024-11-04 14:52:02.540773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:19:32.869 14:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.869 14:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:32.869 [2024-11-04 14:52:02.543335] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.806 "name": "raid_bdev1", 00:19:33.806 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:33.806 "strip_size_kb": 0, 00:19:33.806 "state": "online", 00:19:33.806 "raid_level": "raid1", 00:19:33.806 "superblock": false, 00:19:33.806 "num_base_bdevs": 2, 00:19:33.806 "num_base_bdevs_discovered": 2, 00:19:33.806 "num_base_bdevs_operational": 2, 00:19:33.806 "process": { 00:19:33.806 "type": "rebuild", 00:19:33.806 "target": "spare", 00:19:33.806 "progress": { 00:19:33.806 
"blocks": 20480, 00:19:33.806 "percent": 31 00:19:33.806 } 00:19:33.806 }, 00:19:33.806 "base_bdevs_list": [ 00:19:33.806 { 00:19:33.806 "name": "spare", 00:19:33.806 "uuid": "d33a33b5-d29f-5be8-85f7-47b5e46a9140", 00:19:33.806 "is_configured": true, 00:19:33.806 "data_offset": 0, 00:19:33.806 "data_size": 65536 00:19:33.806 }, 00:19:33.806 { 00:19:33.806 "name": "BaseBdev2", 00:19:33.806 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:33.806 "is_configured": true, 00:19:33.806 "data_offset": 0, 00:19:33.806 "data_size": 65536 00:19:33.806 } 00:19:33.806 ] 00:19:33.806 }' 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.806 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.065 [2024-11-04 14:52:03.716856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.065 [2024-11-04 14:52:03.752127] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:34.065 [2024-11-04 14:52:03.752280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.065 [2024-11-04 14:52:03.752307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.065 [2024-11-04 14:52:03.752324] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:34.065 14:52:03 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.065 "name": "raid_bdev1", 00:19:34.065 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:34.065 "strip_size_kb": 0, 00:19:34.065 "state": "online", 00:19:34.065 "raid_level": "raid1", 00:19:34.065 
"superblock": false, 00:19:34.065 "num_base_bdevs": 2, 00:19:34.065 "num_base_bdevs_discovered": 1, 00:19:34.065 "num_base_bdevs_operational": 1, 00:19:34.065 "base_bdevs_list": [ 00:19:34.065 { 00:19:34.065 "name": null, 00:19:34.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.065 "is_configured": false, 00:19:34.065 "data_offset": 0, 00:19:34.065 "data_size": 65536 00:19:34.065 }, 00:19:34.065 { 00:19:34.065 "name": "BaseBdev2", 00:19:34.065 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:34.065 "is_configured": true, 00:19:34.065 "data_offset": 0, 00:19:34.065 "data_size": 65536 00:19:34.065 } 00:19:34.065 ] 00:19:34.065 }' 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.065 14:52:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.632 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:34.632 "name": "raid_bdev1", 00:19:34.632 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:34.632 "strip_size_kb": 0, 00:19:34.632 "state": "online", 00:19:34.632 "raid_level": "raid1", 00:19:34.632 "superblock": false, 00:19:34.632 "num_base_bdevs": 2, 00:19:34.633 "num_base_bdevs_discovered": 1, 00:19:34.633 "num_base_bdevs_operational": 1, 00:19:34.633 "base_bdevs_list": [ 00:19:34.633 { 00:19:34.633 "name": null, 00:19:34.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.633 "is_configured": false, 00:19:34.633 "data_offset": 0, 00:19:34.633 "data_size": 65536 00:19:34.633 }, 00:19:34.633 { 00:19:34.633 "name": "BaseBdev2", 00:19:34.633 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:34.633 "is_configured": true, 00:19:34.633 "data_offset": 0, 00:19:34.633 "data_size": 65536 00:19:34.633 } 00:19:34.633 ] 00:19:34.633 }' 00:19:34.633 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.633 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:34.633 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.633 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:34.633 14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.633 14:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.633 14:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.633 [2024-11-04 14:52:04.509458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.891 [2024-11-04 14:52:04.525731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:19:34.891 14:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.891 
14:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:34.891 [2024-11-04 14:52:04.528353] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.827 "name": "raid_bdev1", 00:19:35.827 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:35.827 "strip_size_kb": 0, 00:19:35.827 "state": "online", 00:19:35.827 "raid_level": "raid1", 00:19:35.827 "superblock": false, 00:19:35.827 "num_base_bdevs": 2, 00:19:35.827 "num_base_bdevs_discovered": 2, 00:19:35.827 "num_base_bdevs_operational": 2, 00:19:35.827 "process": { 00:19:35.827 "type": "rebuild", 00:19:35.827 "target": "spare", 00:19:35.827 "progress": { 00:19:35.827 "blocks": 20480, 00:19:35.827 "percent": 31 00:19:35.827 } 00:19:35.827 }, 00:19:35.827 "base_bdevs_list": [ 
00:19:35.827 { 00:19:35.827 "name": "spare", 00:19:35.827 "uuid": "d33a33b5-d29f-5be8-85f7-47b5e46a9140", 00:19:35.827 "is_configured": true, 00:19:35.827 "data_offset": 0, 00:19:35.827 "data_size": 65536 00:19:35.827 }, 00:19:35.827 { 00:19:35.827 "name": "BaseBdev2", 00:19:35.827 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:35.827 "is_configured": true, 00:19:35.827 "data_offset": 0, 00:19:35.827 "data_size": 65536 00:19:35.827 } 00:19:35.827 ] 00:19:35.827 }' 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=407 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.827 
14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.827 14:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.085 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.085 "name": "raid_bdev1", 00:19:36.085 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:36.085 "strip_size_kb": 0, 00:19:36.085 "state": "online", 00:19:36.085 "raid_level": "raid1", 00:19:36.085 "superblock": false, 00:19:36.085 "num_base_bdevs": 2, 00:19:36.085 "num_base_bdevs_discovered": 2, 00:19:36.085 "num_base_bdevs_operational": 2, 00:19:36.085 "process": { 00:19:36.085 "type": "rebuild", 00:19:36.085 "target": "spare", 00:19:36.085 "progress": { 00:19:36.085 "blocks": 22528, 00:19:36.085 "percent": 34 00:19:36.085 } 00:19:36.085 }, 00:19:36.085 "base_bdevs_list": [ 00:19:36.085 { 00:19:36.085 "name": "spare", 00:19:36.085 "uuid": "d33a33b5-d29f-5be8-85f7-47b5e46a9140", 00:19:36.085 "is_configured": true, 00:19:36.085 "data_offset": 0, 00:19:36.085 "data_size": 65536 00:19:36.085 }, 00:19:36.085 { 00:19:36.085 "name": "BaseBdev2", 00:19:36.085 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:36.086 "is_configured": true, 00:19:36.086 "data_offset": 0, 00:19:36.086 "data_size": 65536 00:19:36.086 } 00:19:36.086 ] 00:19:36.086 }' 00:19:36.086 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.086 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:36.086 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.086 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.086 14:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.021 "name": "raid_bdev1", 00:19:37.021 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:37.021 "strip_size_kb": 0, 00:19:37.021 "state": "online", 00:19:37.021 "raid_level": "raid1", 00:19:37.021 "superblock": false, 00:19:37.021 "num_base_bdevs": 2, 00:19:37.021 "num_base_bdevs_discovered": 2, 00:19:37.021 "num_base_bdevs_operational": 2, 00:19:37.021 "process": { 
00:19:37.021 "type": "rebuild", 00:19:37.021 "target": "spare", 00:19:37.021 "progress": { 00:19:37.021 "blocks": 47104, 00:19:37.021 "percent": 71 00:19:37.021 } 00:19:37.021 }, 00:19:37.021 "base_bdevs_list": [ 00:19:37.021 { 00:19:37.021 "name": "spare", 00:19:37.021 "uuid": "d33a33b5-d29f-5be8-85f7-47b5e46a9140", 00:19:37.021 "is_configured": true, 00:19:37.021 "data_offset": 0, 00:19:37.021 "data_size": 65536 00:19:37.021 }, 00:19:37.021 { 00:19:37.021 "name": "BaseBdev2", 00:19:37.021 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:37.021 "is_configured": true, 00:19:37.021 "data_offset": 0, 00:19:37.021 "data_size": 65536 00:19:37.021 } 00:19:37.021 ] 00:19:37.021 }' 00:19:37.021 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.279 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.279 14:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.279 14:52:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.279 14:52:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.214 [2024-11-04 14:52:07.755221] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:38.214 [2024-11-04 14:52:07.755358] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:38.214 [2024-11-04 14:52:07.755433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.214 "name": "raid_bdev1", 00:19:38.214 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:38.214 "strip_size_kb": 0, 00:19:38.214 "state": "online", 00:19:38.214 "raid_level": "raid1", 00:19:38.214 "superblock": false, 00:19:38.214 "num_base_bdevs": 2, 00:19:38.214 "num_base_bdevs_discovered": 2, 00:19:38.214 "num_base_bdevs_operational": 2, 00:19:38.214 "base_bdevs_list": [ 00:19:38.214 { 00:19:38.214 "name": "spare", 00:19:38.214 "uuid": "d33a33b5-d29f-5be8-85f7-47b5e46a9140", 00:19:38.214 "is_configured": true, 00:19:38.214 "data_offset": 0, 00:19:38.214 "data_size": 65536 00:19:38.214 }, 00:19:38.214 { 00:19:38.214 "name": "BaseBdev2", 00:19:38.214 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:38.214 "is_configured": true, 00:19:38.214 "data_offset": 0, 00:19:38.214 "data_size": 65536 00:19:38.214 } 00:19:38.214 ] 00:19:38.214 }' 00:19:38.214 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:38.472 14:52:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.472 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.473 "name": "raid_bdev1", 00:19:38.473 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:38.473 "strip_size_kb": 0, 00:19:38.473 "state": "online", 00:19:38.473 "raid_level": "raid1", 00:19:38.473 "superblock": false, 00:19:38.473 "num_base_bdevs": 2, 00:19:38.473 "num_base_bdevs_discovered": 2, 00:19:38.473 "num_base_bdevs_operational": 2, 00:19:38.473 "base_bdevs_list": [ 00:19:38.473 { 00:19:38.473 "name": "spare", 00:19:38.473 "uuid": "d33a33b5-d29f-5be8-85f7-47b5e46a9140", 00:19:38.473 "is_configured": true, 
00:19:38.473 "data_offset": 0, 00:19:38.473 "data_size": 65536 00:19:38.473 }, 00:19:38.473 { 00:19:38.473 "name": "BaseBdev2", 00:19:38.473 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:38.473 "is_configured": true, 00:19:38.473 "data_offset": 0, 00:19:38.473 "data_size": 65536 00:19:38.473 } 00:19:38.473 ] 00:19:38.473 }' 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.473 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.731 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.731 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.731 "name": "raid_bdev1", 00:19:38.731 "uuid": "08defb10-69c2-4db7-a648-3bb2b9bbfd12", 00:19:38.731 "strip_size_kb": 0, 00:19:38.731 "state": "online", 00:19:38.731 "raid_level": "raid1", 00:19:38.731 "superblock": false, 00:19:38.731 "num_base_bdevs": 2, 00:19:38.731 "num_base_bdevs_discovered": 2, 00:19:38.731 "num_base_bdevs_operational": 2, 00:19:38.731 "base_bdevs_list": [ 00:19:38.731 { 00:19:38.731 "name": "spare", 00:19:38.731 "uuid": "d33a33b5-d29f-5be8-85f7-47b5e46a9140", 00:19:38.731 "is_configured": true, 00:19:38.731 "data_offset": 0, 00:19:38.731 "data_size": 65536 00:19:38.731 }, 00:19:38.731 { 00:19:38.731 "name": "BaseBdev2", 00:19:38.731 "uuid": "6600b57e-0f07-55c7-9dc8-d90595d2c36e", 00:19:38.731 "is_configured": true, 00:19:38.731 "data_offset": 0, 00:19:38.731 "data_size": 65536 00:19:38.731 } 00:19:38.731 ] 00:19:38.731 }' 00:19:38.731 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.731 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.298 [2024-11-04 14:52:08.922246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.298 [2024-11-04 14:52:08.922316] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.298 [2024-11-04 14:52:08.922436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.298 [2024-11-04 14:52:08.922541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.298 [2024-11-04 14:52:08.922561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.298 14:52:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:39.556 /dev/nbd0 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.556 1+0 records in 00:19:39.556 1+0 records out 00:19:39.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384167 s, 10.7 MB/s 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:39.556 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.557 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:39.557 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:39.557 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.557 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.557 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:39.815 /dev/nbd1 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.815 1+0 records in 00:19:39.815 1+0 records out 00:19:39.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373007 s, 11.0 MB/s 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.815 14:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:40.083 14:52:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:40.083 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.083 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:40.083 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:40.083 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:40.083 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.083 14:52:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.354 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75628 00:19:40.919 14:52:10 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75628 ']' 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75628 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75628 00:19:40.919 killing process with pid 75628 00:19:40.919 Received shutdown signal, test time was about 60.000000 seconds 00:19:40.919 00:19:40.919 Latency(us) 00:19:40.919 [2024-11-04T14:52:10.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.919 [2024-11-04T14:52:10.811Z] =================================================================================================================== 00:19:40.919 [2024-11-04T14:52:10.811Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75628' 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75628 00:19:40.919 [2024-11-04 14:52:10.618735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.919 14:52:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75628 00:19:41.176 [2024-11-04 14:52:10.920409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:42.549 00:19:42.549 real 0m19.249s 00:19:42.549 user 0m21.502s 00:19:42.549 sys 0m3.827s 00:19:42.549 
************************************ 00:19:42.549 END TEST raid_rebuild_test 00:19:42.549 ************************************ 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.549 14:52:12 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:19:42.549 14:52:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:42.549 14:52:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:42.549 14:52:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.549 ************************************ 00:19:42.549 START TEST raid_rebuild_test_sb 00:19:42.549 ************************************ 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76081 00:19:42.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76081 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 76081 ']' 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:42.549 14:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.549 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:42.549 Zero copy mechanism will not be used. 00:19:42.549 [2024-11-04 14:52:12.267730] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:19:42.549 [2024-11-04 14:52:12.267917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76081 ] 00:19:42.808 [2024-11-04 14:52:12.452496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.808 [2024-11-04 14:52:12.596836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.065 [2024-11-04 14:52:12.829862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.065 [2024-11-04 14:52:12.829968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.632 BaseBdev1_malloc 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.632 [2024-11-04 14:52:13.306015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:43.632 [2024-11-04 14:52:13.306318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.632 [2024-11-04 14:52:13.306364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:43.632 [2024-11-04 14:52:13.306386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.632 [2024-11-04 14:52:13.309767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.632 [2024-11-04 14:52:13.309820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:43.632 BaseBdev1 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.632 BaseBdev2_malloc 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.632 [2024-11-04 14:52:13.361637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:43.632 [2024-11-04 14:52:13.361866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.632 [2024-11-04 14:52:13.361939] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:43.632 [2024-11-04 14:52:13.362075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.632 [2024-11-04 14:52:13.365013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.632 [2024-11-04 14:52:13.365205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:43.632 BaseBdev2 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.632 spare_malloc 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.632 spare_delay 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.632 [2024-11-04 14:52:13.436536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:19:43.632 [2024-11-04 14:52:13.436799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.632 [2024-11-04 14:52:13.436844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:43.632 [2024-11-04 14:52:13.436864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.632 [2024-11-04 14:52:13.439950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.632 [2024-11-04 14:52:13.440137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:43.632 spare 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.632 [2024-11-04 14:52:13.444821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.632 [2024-11-04 14:52:13.447554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.632 [2024-11-04 14:52:13.447815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:43.632 [2024-11-04 14:52:13.447854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:43.632 [2024-11-04 14:52:13.448155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:43.632 [2024-11-04 14:52:13.448453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:43.632 [2024-11-04 14:52:13.448470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:19:43.632 [2024-11-04 14:52:13.448701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:43.632 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:19:43.633 "name": "raid_bdev1", 00:19:43.633 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:43.633 "strip_size_kb": 0, 00:19:43.633 "state": "online", 00:19:43.633 "raid_level": "raid1", 00:19:43.633 "superblock": true, 00:19:43.633 "num_base_bdevs": 2, 00:19:43.633 "num_base_bdevs_discovered": 2, 00:19:43.633 "num_base_bdevs_operational": 2, 00:19:43.633 "base_bdevs_list": [ 00:19:43.633 { 00:19:43.633 "name": "BaseBdev1", 00:19:43.633 "uuid": "cacf1ccf-af38-5be6-ad2b-b52fb1191234", 00:19:43.633 "is_configured": true, 00:19:43.633 "data_offset": 2048, 00:19:43.633 "data_size": 63488 00:19:43.633 }, 00:19:43.633 { 00:19:43.633 "name": "BaseBdev2", 00:19:43.633 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:43.633 "is_configured": true, 00:19:43.633 "data_offset": 2048, 00:19:43.633 "data_size": 63488 00:19:43.633 } 00:19:43.633 ] 00:19:43.633 }' 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.633 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.199 [2024-11-04 14:52:13.937429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.199 14:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.199 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:44.457 [2024-11-04 14:52:14.309221] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:44.457 /dev/nbd0 00:19:44.457 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.715 1+0 records in 00:19:44.715 1+0 records out 00:19:44.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503232 s, 8.1 MB/s 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:44.715 14:52:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:44.715 14:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:51.276 63488+0 records in 00:19:51.276 63488+0 records out 00:19:51.276 32505856 bytes (33 MB, 31 MiB) copied, 6.42619 s, 5.1 MB/s 00:19:51.276 14:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:51.276 14:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:51.276 14:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:51.276 14:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.276 14:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:51.276 14:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.276 14:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:51.276 [2024-11-04 14:52:21.098146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.276 [2024-11-04 14:52:21.130226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.276 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.533 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.533 "name": "raid_bdev1", 00:19:51.533 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:51.533 "strip_size_kb": 0, 00:19:51.533 "state": "online", 00:19:51.533 "raid_level": "raid1", 00:19:51.533 "superblock": true, 00:19:51.533 "num_base_bdevs": 2, 00:19:51.533 "num_base_bdevs_discovered": 1, 00:19:51.533 "num_base_bdevs_operational": 1, 00:19:51.533 "base_bdevs_list": [ 00:19:51.533 { 00:19:51.533 "name": null, 00:19:51.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.533 "is_configured": false, 00:19:51.533 "data_offset": 0, 00:19:51.533 "data_size": 63488 00:19:51.533 }, 00:19:51.533 { 00:19:51.533 "name": "BaseBdev2", 00:19:51.533 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:51.533 "is_configured": true, 00:19:51.533 "data_offset": 2048, 00:19:51.533 "data_size": 63488 00:19:51.533 } 00:19:51.533 ] 00:19:51.533 }' 00:19:51.533 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.533 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:19:51.791 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.791 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 [2024-11-04 14:52:21.654450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.791 [2024-11-04 14:52:21.671290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:19:51.791 14:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.791 14:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:51.791 [2024-11-04 14:52:21.673794] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:53.164 "name": "raid_bdev1", 00:19:53.164 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:53.164 "strip_size_kb": 0, 00:19:53.164 "state": "online", 00:19:53.164 "raid_level": "raid1", 00:19:53.164 "superblock": true, 00:19:53.164 "num_base_bdevs": 2, 00:19:53.164 "num_base_bdevs_discovered": 2, 00:19:53.164 "num_base_bdevs_operational": 2, 00:19:53.164 "process": { 00:19:53.164 "type": "rebuild", 00:19:53.164 "target": "spare", 00:19:53.164 "progress": { 00:19:53.164 "blocks": 20480, 00:19:53.164 "percent": 32 00:19:53.164 } 00:19:53.164 }, 00:19:53.164 "base_bdevs_list": [ 00:19:53.164 { 00:19:53.164 "name": "spare", 00:19:53.164 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:19:53.164 "is_configured": true, 00:19:53.164 "data_offset": 2048, 00:19:53.164 "data_size": 63488 00:19:53.164 }, 00:19:53.164 { 00:19:53.164 "name": "BaseBdev2", 00:19:53.164 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:53.164 "is_configured": true, 00:19:53.164 "data_offset": 2048, 00:19:53.164 "data_size": 63488 00:19:53.164 } 00:19:53.164 ] 00:19:53.164 }' 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.164 [2024-11-04 14:52:22.834884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.164 [2024-11-04 
14:52:22.882857] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:53.164 [2024-11-04 14:52:22.882946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.164 [2024-11-04 14:52:22.882971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.164 [2024-11-04 14:52:22.882990] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.164 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.165 "name": "raid_bdev1", 00:19:53.165 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:53.165 "strip_size_kb": 0, 00:19:53.165 "state": "online", 00:19:53.165 "raid_level": "raid1", 00:19:53.165 "superblock": true, 00:19:53.165 "num_base_bdevs": 2, 00:19:53.165 "num_base_bdevs_discovered": 1, 00:19:53.165 "num_base_bdevs_operational": 1, 00:19:53.165 "base_bdevs_list": [ 00:19:53.165 { 00:19:53.165 "name": null, 00:19:53.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.165 "is_configured": false, 00:19:53.165 "data_offset": 0, 00:19:53.165 "data_size": 63488 00:19:53.165 }, 00:19:53.165 { 00:19:53.165 "name": "BaseBdev2", 00:19:53.165 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:53.165 "is_configured": true, 00:19:53.165 "data_offset": 2048, 00:19:53.165 "data_size": 63488 00:19:53.165 } 00:19:53.165 ] 00:19:53.165 }' 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.165 14:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.729 "name": "raid_bdev1", 00:19:53.729 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:53.729 "strip_size_kb": 0, 00:19:53.729 "state": "online", 00:19:53.729 "raid_level": "raid1", 00:19:53.729 "superblock": true, 00:19:53.729 "num_base_bdevs": 2, 00:19:53.729 "num_base_bdevs_discovered": 1, 00:19:53.729 "num_base_bdevs_operational": 1, 00:19:53.729 "base_bdevs_list": [ 00:19:53.729 { 00:19:53.729 "name": null, 00:19:53.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.729 "is_configured": false, 00:19:53.729 "data_offset": 0, 00:19:53.729 "data_size": 63488 00:19:53.729 }, 00:19:53.729 { 00:19:53.729 "name": "BaseBdev2", 00:19:53.729 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:53.729 "is_configured": true, 00:19:53.729 "data_offset": 2048, 00:19:53.729 "data_size": 63488 00:19:53.729 } 00:19:53.729 ] 00:19:53.729 }' 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.729 14:52:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.729 14:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.729 [2024-11-04 14:52:23.611167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.987 [2024-11-04 14:52:23.627383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:19:53.987 14:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.987 14:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:53.987 [2024-11-04 14:52:23.629960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.921 "name": "raid_bdev1", 00:19:54.921 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:54.921 "strip_size_kb": 0, 00:19:54.921 "state": "online", 00:19:54.921 "raid_level": "raid1", 00:19:54.921 "superblock": true, 00:19:54.921 "num_base_bdevs": 2, 00:19:54.921 "num_base_bdevs_discovered": 2, 00:19:54.921 "num_base_bdevs_operational": 2, 00:19:54.921 "process": { 00:19:54.921 "type": "rebuild", 00:19:54.921 "target": "spare", 00:19:54.921 "progress": { 00:19:54.921 "blocks": 20480, 00:19:54.921 "percent": 32 00:19:54.921 } 00:19:54.921 }, 00:19:54.921 "base_bdevs_list": [ 00:19:54.921 { 00:19:54.921 "name": "spare", 00:19:54.921 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:19:54.921 "is_configured": true, 00:19:54.921 "data_offset": 2048, 00:19:54.921 "data_size": 63488 00:19:54.921 }, 00:19:54.921 { 00:19:54.921 "name": "BaseBdev2", 00:19:54.921 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:54.921 "is_configured": true, 00:19:54.921 "data_offset": 2048, 00:19:54.921 "data_size": 63488 00:19:54.921 } 00:19:54.921 ] 00:19:54.921 }' 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:54.921 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:54.921 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:54.921 14:52:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=426 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.922 14:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.179 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.179 "name": "raid_bdev1", 00:19:55.179 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:55.179 "strip_size_kb": 0, 00:19:55.179 "state": "online", 00:19:55.179 "raid_level": "raid1", 00:19:55.179 "superblock": true, 00:19:55.179 "num_base_bdevs": 2, 00:19:55.179 
"num_base_bdevs_discovered": 2, 00:19:55.179 "num_base_bdevs_operational": 2, 00:19:55.179 "process": { 00:19:55.179 "type": "rebuild", 00:19:55.179 "target": "spare", 00:19:55.179 "progress": { 00:19:55.179 "blocks": 22528, 00:19:55.179 "percent": 35 00:19:55.179 } 00:19:55.179 }, 00:19:55.179 "base_bdevs_list": [ 00:19:55.179 { 00:19:55.179 "name": "spare", 00:19:55.179 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:19:55.179 "is_configured": true, 00:19:55.179 "data_offset": 2048, 00:19:55.179 "data_size": 63488 00:19:55.179 }, 00:19:55.179 { 00:19:55.179 "name": "BaseBdev2", 00:19:55.179 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:55.179 "is_configured": true, 00:19:55.179 "data_offset": 2048, 00:19:55.179 "data_size": 63488 00:19:55.179 } 00:19:55.179 ] 00:19:55.179 }' 00:19:55.179 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.179 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.179 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.179 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.179 14:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.113 14:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.371 14:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.371 "name": "raid_bdev1", 00:19:56.371 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:56.371 "strip_size_kb": 0, 00:19:56.371 "state": "online", 00:19:56.371 "raid_level": "raid1", 00:19:56.371 "superblock": true, 00:19:56.371 "num_base_bdevs": 2, 00:19:56.371 "num_base_bdevs_discovered": 2, 00:19:56.371 "num_base_bdevs_operational": 2, 00:19:56.371 "process": { 00:19:56.371 "type": "rebuild", 00:19:56.371 "target": "spare", 00:19:56.371 "progress": { 00:19:56.371 "blocks": 47104, 00:19:56.371 "percent": 74 00:19:56.371 } 00:19:56.371 }, 00:19:56.371 "base_bdevs_list": [ 00:19:56.371 { 00:19:56.371 "name": "spare", 00:19:56.371 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:19:56.371 "is_configured": true, 00:19:56.371 "data_offset": 2048, 00:19:56.371 "data_size": 63488 00:19:56.371 }, 00:19:56.371 { 00:19:56.371 "name": "BaseBdev2", 00:19:56.371 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:56.371 "is_configured": true, 00:19:56.371 "data_offset": 2048, 00:19:56.371 "data_size": 63488 00:19:56.371 } 00:19:56.371 ] 00:19:56.371 }' 00:19:56.371 14:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.371 14:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.371 14:52:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.371 14:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.371 14:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:56.936 [2024-11-04 14:52:26.754049] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:56.936 [2024-11-04 14:52:26.754168] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:56.936 [2024-11-04 14:52:26.754372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:57.502 "name": "raid_bdev1", 00:19:57.502 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:57.502 "strip_size_kb": 0, 00:19:57.502 "state": "online", 00:19:57.502 "raid_level": "raid1", 00:19:57.502 "superblock": true, 00:19:57.502 "num_base_bdevs": 2, 00:19:57.502 "num_base_bdevs_discovered": 2, 00:19:57.502 "num_base_bdevs_operational": 2, 00:19:57.502 "base_bdevs_list": [ 00:19:57.502 { 00:19:57.502 "name": "spare", 00:19:57.502 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:19:57.502 "is_configured": true, 00:19:57.502 "data_offset": 2048, 00:19:57.502 "data_size": 63488 00:19:57.502 }, 00:19:57.502 { 00:19:57.502 "name": "BaseBdev2", 00:19:57.502 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:57.502 "is_configured": true, 00:19:57.502 "data_offset": 2048, 00:19:57.502 "data_size": 63488 00:19:57.502 } 00:19:57.502 ] 00:19:57.502 }' 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.502 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.503 14:52:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.503 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.503 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.503 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.503 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.503 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.503 "name": "raid_bdev1", 00:19:57.503 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:57.503 "strip_size_kb": 0, 00:19:57.503 "state": "online", 00:19:57.503 "raid_level": "raid1", 00:19:57.503 "superblock": true, 00:19:57.503 "num_base_bdevs": 2, 00:19:57.503 "num_base_bdevs_discovered": 2, 00:19:57.503 "num_base_bdevs_operational": 2, 00:19:57.503 "base_bdevs_list": [ 00:19:57.503 { 00:19:57.503 "name": "spare", 00:19:57.503 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:19:57.503 "is_configured": true, 00:19:57.503 "data_offset": 2048, 00:19:57.503 "data_size": 63488 00:19:57.503 }, 00:19:57.503 { 00:19:57.503 "name": "BaseBdev2", 00:19:57.503 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:57.503 "is_configured": true, 00:19:57.503 "data_offset": 2048, 00:19:57.503 "data_size": 63488 00:19:57.503 } 00:19:57.503 ] 00:19:57.503 }' 00:19:57.503 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.503 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.503 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.761 "name": "raid_bdev1", 00:19:57.761 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:57.761 "strip_size_kb": 0, 00:19:57.761 "state": "online", 00:19:57.761 "raid_level": "raid1", 00:19:57.761 "superblock": true, 00:19:57.761 "num_base_bdevs": 2, 00:19:57.761 
"num_base_bdevs_discovered": 2, 00:19:57.761 "num_base_bdevs_operational": 2, 00:19:57.761 "base_bdevs_list": [ 00:19:57.761 { 00:19:57.761 "name": "spare", 00:19:57.761 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:19:57.761 "is_configured": true, 00:19:57.761 "data_offset": 2048, 00:19:57.761 "data_size": 63488 00:19:57.761 }, 00:19:57.761 { 00:19:57.761 "name": "BaseBdev2", 00:19:57.761 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:57.761 "is_configured": true, 00:19:57.761 "data_offset": 2048, 00:19:57.761 "data_size": 63488 00:19:57.761 } 00:19:57.761 ] 00:19:57.761 }' 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.761 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.064 [2024-11-04 14:52:27.906958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.064 [2024-11-04 14:52:27.907358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.064 [2024-11-04 14:52:27.907495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.064 [2024-11-04 14:52:27.907592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.064 [2024-11-04 14:52:27.907610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.064 14:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.322 14:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:58.580 /dev/nbd0 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.580 1+0 records in 00:19:58.580 1+0 records out 00:19:58.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332718 s, 12.3 MB/s 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.580 14:52:28 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.580 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:58.841 /dev/nbd1 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.841 1+0 records in 00:19:58.841 1+0 records out 00:19:58.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382665 s, 10.7 MB/s 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.841 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:59.098 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:59.098 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:59.098 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:59.098 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.098 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:59.098 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.098 14:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.357 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:59.616 14:52:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.616 [2024-11-04 14:52:29.403751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:59.616 [2024-11-04 14:52:29.403829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.616 [2024-11-04 14:52:29.403864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:59.616 [2024-11-04 14:52:29.403880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.616 [2024-11-04 14:52:29.407376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.616 [2024-11-04 14:52:29.407432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:59.616 [2024-11-04 14:52:29.407591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:59.616 [2024-11-04 14:52:29.407655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.616 [2024-11-04 14:52:29.407870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.616 spare 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.616 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.876 [2024-11-04 14:52:29.508079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:59.876 [2024-11-04 14:52:29.508138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:59.876 [2024-11-04 
14:52:29.508610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:19:59.876 [2024-11-04 14:52:29.508850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:59.876 [2024-11-04 14:52:29.508872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:59.876 [2024-11-04 14:52:29.509096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.876 14:52:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.876 "name": "raid_bdev1", 00:19:59.876 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:19:59.876 "strip_size_kb": 0, 00:19:59.876 "state": "online", 00:19:59.876 "raid_level": "raid1", 00:19:59.876 "superblock": true, 00:19:59.876 "num_base_bdevs": 2, 00:19:59.876 "num_base_bdevs_discovered": 2, 00:19:59.876 "num_base_bdevs_operational": 2, 00:19:59.876 "base_bdevs_list": [ 00:19:59.876 { 00:19:59.876 "name": "spare", 00:19:59.876 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:19:59.876 "is_configured": true, 00:19:59.876 "data_offset": 2048, 00:19:59.876 "data_size": 63488 00:19:59.876 }, 00:19:59.876 { 00:19:59.876 "name": "BaseBdev2", 00:19:59.876 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:19:59.876 "is_configured": true, 00:19:59.876 "data_offset": 2048, 00:19:59.876 "data_size": 63488 00:19:59.876 } 00:19:59.876 ] 00:19:59.876 }' 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.876 14:52:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.443 "name": "raid_bdev1", 00:20:00.443 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:00.443 "strip_size_kb": 0, 00:20:00.443 "state": "online", 00:20:00.443 "raid_level": "raid1", 00:20:00.443 "superblock": true, 00:20:00.443 "num_base_bdevs": 2, 00:20:00.443 "num_base_bdevs_discovered": 2, 00:20:00.443 "num_base_bdevs_operational": 2, 00:20:00.443 "base_bdevs_list": [ 00:20:00.443 { 00:20:00.443 "name": "spare", 00:20:00.443 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:20:00.443 "is_configured": true, 00:20:00.443 "data_offset": 2048, 00:20:00.443 "data_size": 63488 00:20:00.443 }, 00:20:00.443 { 00:20:00.443 "name": "BaseBdev2", 00:20:00.443 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:00.443 "is_configured": true, 00:20:00.443 "data_offset": 2048, 00:20:00.443 "data_size": 63488 00:20:00.443 } 00:20:00.443 ] 00:20:00.443 }' 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.443 
14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.443 [2024-11-04 14:52:30.308358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.443 14:52:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.443 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.701 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.701 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.701 "name": "raid_bdev1", 00:20:00.701 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:00.701 "strip_size_kb": 0, 00:20:00.701 "state": "online", 00:20:00.701 "raid_level": "raid1", 00:20:00.701 "superblock": true, 00:20:00.701 "num_base_bdevs": 2, 00:20:00.701 "num_base_bdevs_discovered": 1, 00:20:00.701 "num_base_bdevs_operational": 1, 00:20:00.701 "base_bdevs_list": [ 00:20:00.701 { 00:20:00.701 "name": null, 00:20:00.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.701 "is_configured": false, 00:20:00.701 "data_offset": 0, 00:20:00.701 "data_size": 63488 00:20:00.701 }, 00:20:00.701 { 00:20:00.701 "name": "BaseBdev2", 00:20:00.701 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:00.701 "is_configured": true, 00:20:00.701 "data_offset": 2048, 00:20:00.701 "data_size": 63488 00:20:00.701 } 00:20:00.701 ] 00:20:00.701 }' 00:20:00.701 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.701 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:00.988 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:00.988 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.988 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.988 [2024-11-04 14:52:30.844774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.988 [2024-11-04 14:52:30.845057] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:00.988 [2024-11-04 14:52:30.845085] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:00.988 [2024-11-04 14:52:30.845142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.988 [2024-11-04 14:52:30.862806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:20:00.988 14:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.988 14:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:00.988 [2024-11-04 14:52:30.865751] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.362 "name": "raid_bdev1", 00:20:02.362 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:02.362 "strip_size_kb": 0, 00:20:02.362 "state": "online", 00:20:02.362 "raid_level": "raid1", 00:20:02.362 "superblock": true, 00:20:02.362 "num_base_bdevs": 2, 00:20:02.362 "num_base_bdevs_discovered": 2, 00:20:02.362 "num_base_bdevs_operational": 2, 00:20:02.362 "process": { 00:20:02.362 "type": "rebuild", 00:20:02.362 "target": "spare", 00:20:02.362 "progress": { 00:20:02.362 "blocks": 20480, 00:20:02.362 "percent": 32 00:20:02.362 } 00:20:02.362 }, 00:20:02.362 "base_bdevs_list": [ 00:20:02.362 { 00:20:02.362 "name": "spare", 00:20:02.362 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:20:02.362 "is_configured": true, 00:20:02.362 "data_offset": 2048, 00:20:02.362 "data_size": 63488 00:20:02.362 }, 00:20:02.362 { 00:20:02.362 "name": "BaseBdev2", 00:20:02.362 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:02.362 "is_configured": true, 00:20:02.362 "data_offset": 2048, 00:20:02.362 "data_size": 63488 00:20:02.362 } 00:20:02.362 ] 00:20:02.362 }' 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.362 14:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:02.362 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.362 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:02.362 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.362 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.362 [2024-11-04 14:52:32.035792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.362 [2024-11-04 14:52:32.075631] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:02.362 [2024-11-04 14:52:32.075763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.362 [2024-11-04 14:52:32.075802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.362 [2024-11-04 14:52:32.075817] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.363 
14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.363 "name": "raid_bdev1", 00:20:02.363 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:02.363 "strip_size_kb": 0, 00:20:02.363 "state": "online", 00:20:02.363 "raid_level": "raid1", 00:20:02.363 "superblock": true, 00:20:02.363 "num_base_bdevs": 2, 00:20:02.363 "num_base_bdevs_discovered": 1, 00:20:02.363 "num_base_bdevs_operational": 1, 00:20:02.363 "base_bdevs_list": [ 00:20:02.363 { 00:20:02.363 "name": null, 00:20:02.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.363 "is_configured": false, 00:20:02.363 "data_offset": 0, 00:20:02.363 "data_size": 63488 00:20:02.363 }, 00:20:02.363 { 00:20:02.363 "name": "BaseBdev2", 00:20:02.363 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:02.363 "is_configured": true, 00:20:02.363 "data_offset": 2048, 00:20:02.363 "data_size": 63488 00:20:02.363 } 00:20:02.363 ] 00:20:02.363 }' 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.363 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:02.930 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:02.930 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.930 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.930 [2024-11-04 14:52:32.613964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:02.930 [2024-11-04 14:52:32.614068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.930 [2024-11-04 14:52:32.614107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:02.930 [2024-11-04 14:52:32.614137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.930 [2024-11-04 14:52:32.614827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.930 [2024-11-04 14:52:32.614883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:02.930 [2024-11-04 14:52:32.615015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:02.930 [2024-11-04 14:52:32.615041] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:02.930 [2024-11-04 14:52:32.615055] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:02.930 [2024-11-04 14:52:32.615092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:02.930 [2024-11-04 14:52:32.631657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:20:02.930 spare 00:20:02.930 14:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.930 14:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:02.930 [2024-11-04 14:52:32.634432] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.865 "name": "raid_bdev1", 00:20:03.865 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:03.865 "strip_size_kb": 0, 00:20:03.865 "state": "online", 00:20:03.865 
"raid_level": "raid1", 00:20:03.865 "superblock": true, 00:20:03.865 "num_base_bdevs": 2, 00:20:03.865 "num_base_bdevs_discovered": 2, 00:20:03.865 "num_base_bdevs_operational": 2, 00:20:03.865 "process": { 00:20:03.865 "type": "rebuild", 00:20:03.865 "target": "spare", 00:20:03.865 "progress": { 00:20:03.865 "blocks": 20480, 00:20:03.865 "percent": 32 00:20:03.865 } 00:20:03.865 }, 00:20:03.865 "base_bdevs_list": [ 00:20:03.865 { 00:20:03.865 "name": "spare", 00:20:03.865 "uuid": "f29841f4-027a-53af-aa09-454b32b84850", 00:20:03.865 "is_configured": true, 00:20:03.865 "data_offset": 2048, 00:20:03.865 "data_size": 63488 00:20:03.865 }, 00:20:03.865 { 00:20:03.865 "name": "BaseBdev2", 00:20:03.865 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:03.865 "is_configured": true, 00:20:03.865 "data_offset": 2048, 00:20:03.865 "data_size": 63488 00:20:03.865 } 00:20:03.865 ] 00:20:03.865 }' 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:03.865 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.124 [2024-11-04 14:52:33.799742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.124 [2024-11-04 14:52:33.843650] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:04.124 [2024-11-04 14:52:33.843903] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.124 [2024-11-04 14:52:33.844058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.124 [2024-11-04 14:52:33.844113] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.124 14:52:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.124 "name": "raid_bdev1", 00:20:04.124 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:04.124 "strip_size_kb": 0, 00:20:04.124 "state": "online", 00:20:04.124 "raid_level": "raid1", 00:20:04.124 "superblock": true, 00:20:04.124 "num_base_bdevs": 2, 00:20:04.124 "num_base_bdevs_discovered": 1, 00:20:04.124 "num_base_bdevs_operational": 1, 00:20:04.124 "base_bdevs_list": [ 00:20:04.124 { 00:20:04.124 "name": null, 00:20:04.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.124 "is_configured": false, 00:20:04.124 "data_offset": 0, 00:20:04.124 "data_size": 63488 00:20:04.124 }, 00:20:04.124 { 00:20:04.124 "name": "BaseBdev2", 00:20:04.124 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:04.124 "is_configured": true, 00:20:04.124 "data_offset": 2048, 00:20:04.124 "data_size": 63488 00:20:04.124 } 00:20:04.124 ] 00:20:04.124 }' 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.124 14:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.693 "name": "raid_bdev1", 00:20:04.693 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:04.693 "strip_size_kb": 0, 00:20:04.693 "state": "online", 00:20:04.693 "raid_level": "raid1", 00:20:04.693 "superblock": true, 00:20:04.693 "num_base_bdevs": 2, 00:20:04.693 "num_base_bdevs_discovered": 1, 00:20:04.693 "num_base_bdevs_operational": 1, 00:20:04.693 "base_bdevs_list": [ 00:20:04.693 { 00:20:04.693 "name": null, 00:20:04.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.693 "is_configured": false, 00:20:04.693 "data_offset": 0, 00:20:04.693 "data_size": 63488 00:20:04.693 }, 00:20:04.693 { 00:20:04.693 "name": "BaseBdev2", 00:20:04.693 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:04.693 "is_configured": true, 00:20:04.693 "data_offset": 2048, 00:20:04.693 "data_size": 63488 00:20:04.693 } 00:20:04.693 ] 00:20:04.693 }' 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.693 14:52:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.952 [2024-11-04 14:52:34.584791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:04.952 [2024-11-04 14:52:34.585109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.952 [2024-11-04 14:52:34.585159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:04.952 [2024-11-04 14:52:34.585190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.952 [2024-11-04 14:52:34.585836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.952 [2024-11-04 14:52:34.585863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:04.952 [2024-11-04 14:52:34.585980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:04.952 [2024-11-04 14:52:34.586002] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:04.952 [2024-11-04 14:52:34.586017] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:04.952 [2024-11-04 14:52:34.586063] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:04.952 BaseBdev1 00:20:04.952 14:52:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.952 14:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.887 "name": "raid_bdev1", 00:20:05.887 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:05.887 
"strip_size_kb": 0, 00:20:05.887 "state": "online", 00:20:05.887 "raid_level": "raid1", 00:20:05.887 "superblock": true, 00:20:05.887 "num_base_bdevs": 2, 00:20:05.887 "num_base_bdevs_discovered": 1, 00:20:05.887 "num_base_bdevs_operational": 1, 00:20:05.887 "base_bdevs_list": [ 00:20:05.887 { 00:20:05.887 "name": null, 00:20:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.887 "is_configured": false, 00:20:05.887 "data_offset": 0, 00:20:05.887 "data_size": 63488 00:20:05.887 }, 00:20:05.887 { 00:20:05.887 "name": "BaseBdev2", 00:20:05.887 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:05.887 "is_configured": true, 00:20:05.887 "data_offset": 2048, 00:20:05.887 "data_size": 63488 00:20:05.887 } 00:20:05.887 ] 00:20:05.887 }' 00:20:05.887 14:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.888 14:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.453 14:52:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.453 "name": "raid_bdev1", 00:20:06.453 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:06.453 "strip_size_kb": 0, 00:20:06.453 "state": "online", 00:20:06.453 "raid_level": "raid1", 00:20:06.453 "superblock": true, 00:20:06.453 "num_base_bdevs": 2, 00:20:06.453 "num_base_bdevs_discovered": 1, 00:20:06.453 "num_base_bdevs_operational": 1, 00:20:06.453 "base_bdevs_list": [ 00:20:06.453 { 00:20:06.453 "name": null, 00:20:06.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.453 "is_configured": false, 00:20:06.453 "data_offset": 0, 00:20:06.453 "data_size": 63488 00:20:06.453 }, 00:20:06.453 { 00:20:06.453 "name": "BaseBdev2", 00:20:06.453 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:06.453 "is_configured": true, 00:20:06.453 "data_offset": 2048, 00:20:06.453 "data_size": 63488 00:20:06.453 } 00:20:06.453 ] 00:20:06.453 }' 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.453 [2024-11-04 14:52:36.265550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.453 [2024-11-04 14:52:36.266009] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:06.453 [2024-11-04 14:52:36.266044] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:06.453 request: 00:20:06.453 { 00:20:06.453 "base_bdev": "BaseBdev1", 00:20:06.453 "raid_bdev": "raid_bdev1", 00:20:06.453 "method": "bdev_raid_add_base_bdev", 00:20:06.453 "req_id": 1 00:20:06.453 } 00:20:06.453 Got JSON-RPC error response 00:20:06.453 response: 00:20:06.453 { 00:20:06.453 "code": -22, 00:20:06.453 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:06.453 } 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.453 14:52:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.453 14:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:07.389 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.659 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.659 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.659 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.659 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.659 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:07.659 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.659 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.660 "name": "raid_bdev1", 00:20:07.660 "uuid": 
"a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:07.660 "strip_size_kb": 0, 00:20:07.660 "state": "online", 00:20:07.660 "raid_level": "raid1", 00:20:07.660 "superblock": true, 00:20:07.660 "num_base_bdevs": 2, 00:20:07.660 "num_base_bdevs_discovered": 1, 00:20:07.660 "num_base_bdevs_operational": 1, 00:20:07.660 "base_bdevs_list": [ 00:20:07.660 { 00:20:07.660 "name": null, 00:20:07.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.660 "is_configured": false, 00:20:07.660 "data_offset": 0, 00:20:07.660 "data_size": 63488 00:20:07.660 }, 00:20:07.660 { 00:20:07.660 "name": "BaseBdev2", 00:20:07.660 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:07.660 "is_configured": true, 00:20:07.660 "data_offset": 2048, 00:20:07.660 "data_size": 63488 00:20:07.660 } 00:20:07.660 ] 00:20:07.660 }' 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.660 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:07.918 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.176 "name": "raid_bdev1", 00:20:08.176 "uuid": "a80e9e82-9b60-403f-85a5-a7aae6276056", 00:20:08.176 "strip_size_kb": 0, 00:20:08.176 "state": "online", 00:20:08.176 "raid_level": "raid1", 00:20:08.176 "superblock": true, 00:20:08.176 "num_base_bdevs": 2, 00:20:08.176 "num_base_bdevs_discovered": 1, 00:20:08.176 "num_base_bdevs_operational": 1, 00:20:08.176 "base_bdevs_list": [ 00:20:08.176 { 00:20:08.176 "name": null, 00:20:08.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.176 "is_configured": false, 00:20:08.176 "data_offset": 0, 00:20:08.176 "data_size": 63488 00:20:08.176 }, 00:20:08.176 { 00:20:08.176 "name": "BaseBdev2", 00:20:08.176 "uuid": "708078d6-bdf3-554d-9017-1c1b61e4ae17", 00:20:08.176 "is_configured": true, 00:20:08.176 "data_offset": 2048, 00:20:08.176 "data_size": 63488 00:20:08.176 } 00:20:08.176 ] 00:20:08.176 }' 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76081 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 76081 ']' 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 76081 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76081 00:20:08.176 killing process with pid 76081 00:20:08.176 Received shutdown signal, test time was about 60.000000 seconds 00:20:08.176 00:20:08.176 Latency(us) 00:20:08.176 [2024-11-04T14:52:38.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.176 [2024-11-04T14:52:38.068Z] =================================================================================================================== 00:20:08.176 [2024-11-04T14:52:38.068Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76081' 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 76081 00:20:08.176 [2024-11-04 14:52:37.957403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:08.176 14:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 76081 00:20:08.176 [2024-11-04 14:52:37.957578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.176 [2024-11-04 14:52:37.957661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.177 [2024-11-04 14:52:37.957682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:08.435 [2024-11-04 14:52:38.232999] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.811 ************************************ 00:20:09.811 END TEST raid_rebuild_test_sb 
00:20:09.811 ************************************ 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:09.811 00:20:09.811 real 0m27.138s 00:20:09.811 user 0m33.038s 00:20:09.811 sys 0m4.143s 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.811 14:52:39 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:20:09.811 14:52:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:09.811 14:52:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:09.811 14:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:09.811 ************************************ 00:20:09.811 START TEST raid_rebuild_test_io 00:20:09.811 ************************************ 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76844 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76844 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 
76844 ']' 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.811 14:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.811 [2024-11-04 14:52:39.481762] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:20:09.811 [2024-11-04 14:52:39.483056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76844 ] 00:20:09.811 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:09.811 Zero copy mechanism will not be used. 
00:20:09.811 [2024-11-04 14:52:39.668865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.069 [2024-11-04 14:52:39.806042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.326 [2024-11-04 14:52:40.018617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.326 [2024-11-04 14:52:40.018888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.583 BaseBdev1_malloc 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.583 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.841 [2024-11-04 14:52:40.478715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:10.841 [2024-11-04 14:52:40.478815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.841 [2024-11-04 14:52:40.478849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:10.841 [2024-11-04 
14:52:40.478868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.841 [2024-11-04 14:52:40.481661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.841 [2024-11-04 14:52:40.481722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:10.841 BaseBdev1 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.841 BaseBdev2_malloc 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.841 [2024-11-04 14:52:40.531772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:10.841 [2024-11-04 14:52:40.531870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.841 [2024-11-04 14:52:40.531899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:10.841 [2024-11-04 14:52:40.531919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.841 [2024-11-04 14:52:40.534823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:20:10.841 [2024-11-04 14:52:40.534877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:10.841 BaseBdev2 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.841 spare_malloc 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.841 spare_delay 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.841 [2024-11-04 14:52:40.605203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:10.841 [2024-11-04 14:52:40.605502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.841 [2024-11-04 14:52:40.605544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:10.841 [2024-11-04 14:52:40.605564] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.841 [2024-11-04 14:52:40.608414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.841 [2024-11-04 14:52:40.608468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:10.841 spare 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.841 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.841 [2024-11-04 14:52:40.613339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.841 [2024-11-04 14:52:40.615767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.842 [2024-11-04 14:52:40.616036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:10.842 [2024-11-04 14:52:40.616067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:10.842 [2024-11-04 14:52:40.616409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:10.842 [2024-11-04 14:52:40.616613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:10.842 [2024-11-04 14:52:40.616633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:10.842 [2024-11-04 14:52:40.616833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.842 "name": "raid_bdev1", 00:20:10.842 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:10.842 "strip_size_kb": 0, 00:20:10.842 "state": "online", 00:20:10.842 "raid_level": "raid1", 00:20:10.842 "superblock": false, 00:20:10.842 "num_base_bdevs": 2, 00:20:10.842 
"num_base_bdevs_discovered": 2, 00:20:10.842 "num_base_bdevs_operational": 2, 00:20:10.842 "base_bdevs_list": [ 00:20:10.842 { 00:20:10.842 "name": "BaseBdev1", 00:20:10.842 "uuid": "cd20ae08-d797-5706-afd4-6e4ca315e598", 00:20:10.842 "is_configured": true, 00:20:10.842 "data_offset": 0, 00:20:10.842 "data_size": 65536 00:20:10.842 }, 00:20:10.842 { 00:20:10.842 "name": "BaseBdev2", 00:20:10.842 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:10.842 "is_configured": true, 00:20:10.842 "data_offset": 0, 00:20:10.842 "data_size": 65536 00:20:10.842 } 00:20:10.842 ] 00:20:10.842 }' 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.842 14:52:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.429 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:11.429 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:11.429 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.429 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.429 [2024-11-04 14:52:41.129940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.430 [2024-11-04 14:52:41.229576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.430 "name": "raid_bdev1", 00:20:11.430 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:11.430 "strip_size_kb": 0, 00:20:11.430 "state": "online", 00:20:11.430 "raid_level": "raid1", 00:20:11.430 "superblock": false, 00:20:11.430 "num_base_bdevs": 2, 00:20:11.430 "num_base_bdevs_discovered": 1, 00:20:11.430 "num_base_bdevs_operational": 1, 00:20:11.430 "base_bdevs_list": [ 00:20:11.430 { 00:20:11.430 "name": null, 00:20:11.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.430 "is_configured": false, 00:20:11.430 "data_offset": 0, 00:20:11.430 "data_size": 65536 00:20:11.430 }, 00:20:11.430 { 00:20:11.430 "name": "BaseBdev2", 00:20:11.430 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:11.430 "is_configured": true, 00:20:11.430 "data_offset": 0, 00:20:11.430 "data_size": 65536 00:20:11.430 } 00:20:11.430 ] 00:20:11.430 }' 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.430 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.688 [2024-11-04 14:52:41.337959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:11.688 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:20:11.688 Zero copy mechanism will not be used. 00:20:11.688 Running I/O for 60 seconds... 00:20:11.946 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:11.946 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.946 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.946 [2024-11-04 14:52:41.728870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:11.946 14:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.946 14:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:11.946 [2024-11-04 14:52:41.799643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:11.946 [2024-11-04 14:52:41.802437] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:12.204 [2024-11-04 14:52:41.905512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:12.204 [2024-11-04 14:52:41.906299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:12.204 [2024-11-04 14:52:42.026051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:12.204 [2024-11-04 14:52:42.026617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:12.720 157.00 IOPS, 471.00 MiB/s [2024-11-04T14:52:42.612Z] [2024-11-04 14:52:42.362725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.978 [2024-11-04 14:52:42.823440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:12.978 [2024-11-04 14:52:42.832413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.978 "name": "raid_bdev1", 00:20:12.978 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:12.978 "strip_size_kb": 0, 00:20:12.978 "state": "online", 00:20:12.978 "raid_level": "raid1", 00:20:12.978 "superblock": false, 00:20:12.978 "num_base_bdevs": 2, 00:20:12.978 "num_base_bdevs_discovered": 2, 00:20:12.978 "num_base_bdevs_operational": 2, 00:20:12.978 "process": { 00:20:12.978 "type": "rebuild", 00:20:12.978 "target": "spare", 00:20:12.978 "progress": { 00:20:12.978 "blocks": 12288, 00:20:12.978 "percent": 18 00:20:12.978 } 00:20:12.978 }, 
00:20:12.978 "base_bdevs_list": [ 00:20:12.978 { 00:20:12.978 "name": "spare", 00:20:12.978 "uuid": "5b01929e-8cd4-51e9-a1aa-b6b329e41698", 00:20:12.978 "is_configured": true, 00:20:12.978 "data_offset": 0, 00:20:12.978 "data_size": 65536 00:20:12.978 }, 00:20:12.978 { 00:20:12.978 "name": "BaseBdev2", 00:20:12.978 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:12.978 "is_configured": true, 00:20:12.978 "data_offset": 0, 00:20:12.978 "data_size": 65536 00:20:12.978 } 00:20:12.978 ] 00:20:12.978 }' 00:20:12.978 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.236 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.236 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.236 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.236 14:52:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:13.236 14:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.236 14:52:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.236 [2024-11-04 14:52:42.965219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.236 [2024-11-04 14:52:43.035150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:13.236 [2024-11-04 14:52:43.035791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:13.494 [2024-11-04 14:52:43.137956] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:13.494 [2024-11-04 14:52:43.149216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:13.494 [2024-11-04 14:52:43.149433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.494 [2024-11-04 14:52:43.149493] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:13.494 [2024-11-04 14:52:43.187780] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.494 "name": "raid_bdev1", 00:20:13.494 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:13.494 "strip_size_kb": 0, 00:20:13.494 "state": "online", 00:20:13.494 "raid_level": "raid1", 00:20:13.494 "superblock": false, 00:20:13.494 "num_base_bdevs": 2, 00:20:13.494 "num_base_bdevs_discovered": 1, 00:20:13.494 "num_base_bdevs_operational": 1, 00:20:13.494 "base_bdevs_list": [ 00:20:13.494 { 00:20:13.494 "name": null, 00:20:13.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.494 "is_configured": false, 00:20:13.494 "data_offset": 0, 00:20:13.494 "data_size": 65536 00:20:13.494 }, 00:20:13.494 { 00:20:13.494 "name": "BaseBdev2", 00:20:13.494 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:13.494 "is_configured": true, 00:20:13.494 "data_offset": 0, 00:20:13.494 "data_size": 65536 00:20:13.494 } 00:20:13.494 ] 00:20:13.494 }' 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.494 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:14.059 144.00 IOPS, 432.00 MiB/s [2024-11-04T14:52:43.951Z] 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.059 "name": "raid_bdev1", 00:20:14.059 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:14.059 "strip_size_kb": 0, 00:20:14.059 "state": "online", 00:20:14.059 "raid_level": "raid1", 00:20:14.059 "superblock": false, 00:20:14.059 "num_base_bdevs": 2, 00:20:14.059 "num_base_bdevs_discovered": 1, 00:20:14.059 "num_base_bdevs_operational": 1, 00:20:14.059 "base_bdevs_list": [ 00:20:14.059 { 00:20:14.059 "name": null, 00:20:14.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.059 "is_configured": false, 00:20:14.059 "data_offset": 0, 00:20:14.059 "data_size": 65536 00:20:14.059 }, 00:20:14.059 { 00:20:14.059 "name": "BaseBdev2", 00:20:14.059 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:14.059 "is_configured": true, 00:20:14.059 "data_offset": 0, 00:20:14.059 "data_size": 65536 00:20:14.059 } 00:20:14.059 ] 00:20:14.059 }' 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:20:14.059 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.060 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:14.060 [2024-11-04 14:52:43.864514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.060 14:52:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.060 14:52:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:14.317 [2024-11-04 14:52:43.954947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:14.317 [2024-11-04 14:52:43.957907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:14.317 [2024-11-04 14:52:44.085711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:14.317 [2024-11-04 14:52:44.086350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:14.575 [2024-11-04 14:52:44.214861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:14.575 [2024-11-04 14:52:44.215255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:14.833 158.33 IOPS, 475.00 MiB/s [2024-11-04T14:52:44.725Z] [2024-11-04 14:52:44.566814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:15.092 [2024-11-04 14:52:44.803836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:15.092 14:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.092 14:52:44 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.092 14:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.093 "name": "raid_bdev1", 00:20:15.093 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:15.093 "strip_size_kb": 0, 00:20:15.093 "state": "online", 00:20:15.093 "raid_level": "raid1", 00:20:15.093 "superblock": false, 00:20:15.093 "num_base_bdevs": 2, 00:20:15.093 "num_base_bdevs_discovered": 2, 00:20:15.093 "num_base_bdevs_operational": 2, 00:20:15.093 "process": { 00:20:15.093 "type": "rebuild", 00:20:15.093 "target": "spare", 00:20:15.093 "progress": { 00:20:15.093 "blocks": 10240, 00:20:15.093 "percent": 15 00:20:15.093 } 00:20:15.093 }, 00:20:15.093 "base_bdevs_list": [ 00:20:15.093 { 00:20:15.093 "name": "spare", 00:20:15.093 "uuid": "5b01929e-8cd4-51e9-a1aa-b6b329e41698", 00:20:15.093 "is_configured": true, 00:20:15.093 "data_offset": 0, 00:20:15.093 "data_size": 65536 00:20:15.093 }, 00:20:15.093 { 00:20:15.093 "name": "BaseBdev2", 00:20:15.093 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:15.093 "is_configured": true, 00:20:15.093 
"data_offset": 0, 00:20:15.093 "data_size": 65536 00:20:15.093 } 00:20:15.093 ] 00:20:15.093 }' 00:20:15.093 14:52:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=447 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.351 "name": "raid_bdev1", 00:20:15.351 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:15.351 "strip_size_kb": 0, 00:20:15.351 "state": "online", 00:20:15.351 "raid_level": "raid1", 00:20:15.351 "superblock": false, 00:20:15.351 "num_base_bdevs": 2, 00:20:15.351 "num_base_bdevs_discovered": 2, 00:20:15.351 "num_base_bdevs_operational": 2, 00:20:15.351 "process": { 00:20:15.351 "type": "rebuild", 00:20:15.351 "target": "spare", 00:20:15.351 "progress": { 00:20:15.351 "blocks": 12288, 00:20:15.351 "percent": 18 00:20:15.351 } 00:20:15.351 }, 00:20:15.351 "base_bdevs_list": [ 00:20:15.351 { 00:20:15.351 "name": "spare", 00:20:15.351 "uuid": "5b01929e-8cd4-51e9-a1aa-b6b329e41698", 00:20:15.351 "is_configured": true, 00:20:15.351 "data_offset": 0, 00:20:15.351 "data_size": 65536 00:20:15.351 }, 00:20:15.351 { 00:20:15.351 "name": "BaseBdev2", 00:20:15.351 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:15.351 "is_configured": true, 00:20:15.351 "data_offset": 0, 00:20:15.351 "data_size": 65536 00:20:15.351 } 00:20:15.351 ] 00:20:15.351 }' 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.351 [2024-11-04 14:52:45.155833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.351 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:20:15.609 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.609 14:52:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:15.609 [2024-11-04 14:52:45.284417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:15.609 [2024-11-04 14:52:45.284912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:15.867 139.75 IOPS, 419.25 MiB/s [2024-11-04T14:52:45.759Z] [2024-11-04 14:52:45.620496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:16.434 [2024-11-04 14:52:46.076718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:16.434 [2024-11-04 14:52:46.213027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.434 
14:52:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.434 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.434 "name": "raid_bdev1", 00:20:16.434 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:16.434 "strip_size_kb": 0, 00:20:16.434 "state": "online", 00:20:16.434 "raid_level": "raid1", 00:20:16.434 "superblock": false, 00:20:16.434 "num_base_bdevs": 2, 00:20:16.434 "num_base_bdevs_discovered": 2, 00:20:16.434 "num_base_bdevs_operational": 2, 00:20:16.434 "process": { 00:20:16.434 "type": "rebuild", 00:20:16.434 "target": "spare", 00:20:16.434 "progress": { 00:20:16.434 "blocks": 28672, 00:20:16.434 "percent": 43 00:20:16.434 } 00:20:16.434 }, 00:20:16.434 "base_bdevs_list": [ 00:20:16.434 { 00:20:16.434 "name": "spare", 00:20:16.434 "uuid": "5b01929e-8cd4-51e9-a1aa-b6b329e41698", 00:20:16.434 "is_configured": true, 00:20:16.434 "data_offset": 0, 00:20:16.434 "data_size": 65536 00:20:16.434 }, 00:20:16.434 { 00:20:16.434 "name": "BaseBdev2", 00:20:16.435 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:16.435 "is_configured": true, 00:20:16.435 "data_offset": 0, 00:20:16.435 "data_size": 65536 00:20:16.435 } 00:20:16.435 ] 00:20:16.435 }' 00:20:16.435 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.693 120.40 IOPS, 361.20 MiB/s [2024-11-04T14:52:46.585Z] 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.693 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.693 14:52:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.693 14:52:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:16.953 [2024-11-04 14:52:46.586966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:16.953 [2024-11-04 14:52:46.820505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:17.779 107.17 IOPS, 321.50 MiB/s [2024-11-04T14:52:47.671Z] 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.779 "name": "raid_bdev1", 00:20:17.779 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:17.779 "strip_size_kb": 0, 00:20:17.779 "state": "online", 00:20:17.779 "raid_level": "raid1", 00:20:17.779 "superblock": 
false, 00:20:17.779 "num_base_bdevs": 2, 00:20:17.779 "num_base_bdevs_discovered": 2, 00:20:17.779 "num_base_bdevs_operational": 2, 00:20:17.779 "process": { 00:20:17.779 "type": "rebuild", 00:20:17.779 "target": "spare", 00:20:17.779 "progress": { 00:20:17.779 "blocks": 45056, 00:20:17.779 "percent": 68 00:20:17.779 } 00:20:17.779 }, 00:20:17.779 "base_bdevs_list": [ 00:20:17.779 { 00:20:17.779 "name": "spare", 00:20:17.779 "uuid": "5b01929e-8cd4-51e9-a1aa-b6b329e41698", 00:20:17.779 "is_configured": true, 00:20:17.779 "data_offset": 0, 00:20:17.779 "data_size": 65536 00:20:17.779 }, 00:20:17.779 { 00:20:17.779 "name": "BaseBdev2", 00:20:17.779 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:17.779 "is_configured": true, 00:20:17.779 "data_offset": 0, 00:20:17.779 "data_size": 65536 00:20:17.779 } 00:20:17.779 ] 00:20:17.779 }' 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.779 14:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:18.736 96.14 IOPS, 288.43 MiB/s [2024-11-04T14:52:48.628Z] [2024-11-04 14:52:48.489088] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.736 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.736 [2024-11-04 14:52:48.589061] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:18.736 [2024-11-04 14:52:48.591233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.995 "name": "raid_bdev1", 00:20:18.995 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:18.995 "strip_size_kb": 0, 00:20:18.995 "state": "online", 00:20:18.995 "raid_level": "raid1", 00:20:18.995 "superblock": false, 00:20:18.995 "num_base_bdevs": 2, 00:20:18.995 "num_base_bdevs_discovered": 2, 00:20:18.995 "num_base_bdevs_operational": 2, 00:20:18.995 "base_bdevs_list": [ 00:20:18.995 { 00:20:18.995 "name": "spare", 00:20:18.995 "uuid": "5b01929e-8cd4-51e9-a1aa-b6b329e41698", 00:20:18.995 "is_configured": true, 00:20:18.995 "data_offset": 0, 00:20:18.995 "data_size": 65536 00:20:18.995 }, 00:20:18.995 { 00:20:18.995 "name": "BaseBdev2", 00:20:18.995 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:18.995 "is_configured": true, 00:20:18.995 "data_offset": 0, 00:20:18.995 "data_size": 65536 00:20:18.995 } 
00:20:18.995 ] 00:20:18.995 }' 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.995 "name": "raid_bdev1", 00:20:18.995 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:18.995 "strip_size_kb": 0, 00:20:18.995 "state": "online", 00:20:18.995 "raid_level": "raid1", 00:20:18.995 
"superblock": false, 00:20:18.995 "num_base_bdevs": 2, 00:20:18.995 "num_base_bdevs_discovered": 2, 00:20:18.995 "num_base_bdevs_operational": 2, 00:20:18.995 "base_bdevs_list": [ 00:20:18.995 { 00:20:18.995 "name": "spare", 00:20:18.995 "uuid": "5b01929e-8cd4-51e9-a1aa-b6b329e41698", 00:20:18.995 "is_configured": true, 00:20:18.995 "data_offset": 0, 00:20:18.995 "data_size": 65536 00:20:18.995 }, 00:20:18.995 { 00:20:18.995 "name": "BaseBdev2", 00:20:18.995 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:18.995 "is_configured": true, 00:20:18.995 "data_offset": 0, 00:20:18.995 "data_size": 65536 00:20:18.995 } 00:20:18.995 ] 00:20:18.995 }' 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:18.995 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.254 "name": "raid_bdev1", 00:20:19.254 "uuid": "d310d2b8-ccba-403b-8471-c2e25c160a82", 00:20:19.254 "strip_size_kb": 0, 00:20:19.254 "state": "online", 00:20:19.254 "raid_level": "raid1", 00:20:19.254 "superblock": false, 00:20:19.254 "num_base_bdevs": 2, 00:20:19.254 "num_base_bdevs_discovered": 2, 00:20:19.254 "num_base_bdevs_operational": 2, 00:20:19.254 "base_bdevs_list": [ 00:20:19.254 { 00:20:19.254 "name": "spare", 00:20:19.254 "uuid": "5b01929e-8cd4-51e9-a1aa-b6b329e41698", 00:20:19.254 "is_configured": true, 00:20:19.254 "data_offset": 0, 00:20:19.254 "data_size": 65536 00:20:19.254 }, 00:20:19.254 { 00:20:19.254 "name": "BaseBdev2", 00:20:19.254 "uuid": "13574294-2822-56ec-8089-1729ce208772", 00:20:19.254 "is_configured": true, 00:20:19.254 "data_offset": 0, 00:20:19.254 "data_size": 65536 00:20:19.254 } 00:20:19.254 ] 00:20:19.254 }' 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.254 14:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.801 88.12 IOPS, 264.38 MiB/s [2024-11-04T14:52:49.693Z] 14:52:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.801 [2024-11-04 14:52:49.460170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.801 [2024-11-04 14:52:49.460241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.801 00:20:19.801 Latency(us) 00:20:19.801 [2024-11-04T14:52:49.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.801 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:19.801 raid_bdev1 : 8.17 87.05 261.16 0.00 0.00 15310.26 305.34 118203.11 00:20:19.801 [2024-11-04T14:52:49.693Z] =================================================================================================================== 00:20:19.801 [2024-11-04T14:52:49.693Z] Total : 87.05 261.16 0.00 0.00 15310.26 305.34 118203.11 00:20:19.801 [2024-11-04 14:52:49.530905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.801 [2024-11-04 14:52:49.530985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.801 [2024-11-04 14:52:49.531100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.801 [2024-11-04 14:52:49.531125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:19.801 { 00:20:19.801 "results": [ 00:20:19.801 { 00:20:19.801 "job": "raid_bdev1", 00:20:19.801 "core_mask": "0x1", 00:20:19.801 "workload": "randrw", 00:20:19.801 "percentage": 50, 00:20:19.801 "status": "finished", 00:20:19.801 "queue_depth": 2, 00:20:19.801 "io_size": 3145728, 00:20:19.801 "runtime": 8.167263, 
00:20:19.801 "iops": 87.05486770782329, 00:20:19.801 "mibps": 261.1646031234699, 00:20:19.801 "io_failed": 0, 00:20:19.801 "io_timeout": 0, 00:20:19.801 "avg_latency_us": 15310.262483058432, 00:20:19.801 "min_latency_us": 305.3381818181818, 00:20:19.801 "max_latency_us": 118203.11272727273 00:20:19.801 } 00:20:19.801 ], 00:20:19.801 "core_count": 1 00:20:19.801 } 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@12 -- # local i 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:19.801 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:20.059 /dev/nbd0 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.059 1+0 records in 00:20:20.059 1+0 records out 00:20:20.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361929 s, 11.3 MB/s 00:20:20.059 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # size=4096 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:20.318 14:52:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:20.577 /dev/nbd1 00:20:20.577 14:52:50 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.577 1+0 records in 00:20:20.577 1+0 records out 00:20:20.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412794 s, 9.9 MB/s 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:20:20.577 14:52:50 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:20.577 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:20.835 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:20.835 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:20.835 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:20.835 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:20.835 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:20.835 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.835 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:21.094 14:52:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76844 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76844 ']' 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76844 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@957 -- # uname 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76844 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:21.352 killing process with pid 76844 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76844' 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76844 00:20:21.352 Received shutdown signal, test time was about 9.884514 seconds 00:20:21.352 00:20:21.352 Latency(us) 00:20:21.352 [2024-11-04T14:52:51.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.352 [2024-11-04T14:52:51.244Z] =================================================================================================================== 00:20:21.352 [2024-11-04T14:52:51.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.352 [2024-11-04 14:52:51.225302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:21.352 14:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76844 00:20:21.611 [2024-11-04 14:52:51.447833] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:22.985 14:52:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:22.986 00:20:22.986 real 0m13.266s 00:20:22.986 user 0m17.273s 00:20:22.986 sys 0m1.543s 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:22.986 ************************************ 
00:20:22.986 END TEST raid_rebuild_test_io 00:20:22.986 ************************************ 00:20:22.986 14:52:52 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:20:22.986 14:52:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:22.986 14:52:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:22.986 14:52:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:22.986 ************************************ 00:20:22.986 START TEST raid_rebuild_test_sb_io 00:20:22.986 ************************************ 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77231 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77231 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77231 ']' 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.986 
14:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:22.986 14:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:22.986 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:22.986 Zero copy mechanism will not be used. 00:20:22.986 [2024-11-04 14:52:52.784858] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:20:22.986 [2024-11-04 14:52:52.785028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77231 ] 00:20:23.244 [2024-11-04 14:52:52.962181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.244 [2024-11-04 14:52:53.097491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.503 [2024-11-04 14:52:53.305482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.503 [2024-11-04 14:52:53.305564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.070 BaseBdev1_malloc 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.070 [2024-11-04 14:52:53.861134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:24.070 [2024-11-04 14:52:53.861265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.070 [2024-11-04 14:52:53.861301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:24.070 [2024-11-04 14:52:53.861321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.070 [2024-11-04 14:52:53.864159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.070 [2024-11-04 14:52:53.864220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:24.070 BaseBdev1 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.070 BaseBdev2_malloc 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.070 [2024-11-04 14:52:53.914594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:24.070 [2024-11-04 14:52:53.914671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.070 [2024-11-04 14:52:53.914699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:24.070 [2024-11-04 14:52:53.914720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.070 [2024-11-04 14:52:53.917643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.070 [2024-11-04 14:52:53.917686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:24.070 BaseBdev2 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.070 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.329 spare_malloc 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.329 14:52:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.329 spare_delay 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.329 [2024-11-04 14:52:53.986302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.329 [2024-11-04 14:52:53.986378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.329 [2024-11-04 14:52:53.986406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:24.329 [2024-11-04 14:52:53.986424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.329 [2024-11-04 14:52:53.989189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.329 [2024-11-04 14:52:53.989289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.329 spare 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:24.329 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.330 14:52:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.330 [2024-11-04 14:52:53.994383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.330 [2024-11-04 14:52:53.996746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.330 [2024-11-04 14:52:53.996967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:24.330 [2024-11-04 14:52:53.997003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:24.330 [2024-11-04 14:52:53.997359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:24.330 [2024-11-04 14:52:53.997633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:24.330 [2024-11-04 14:52:53.997658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:24.330 [2024-11-04 14:52:53.997843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.330 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.330 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:24.330 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.330 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.330 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.330 14:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.330 "name": "raid_bdev1", 00:20:24.330 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:24.330 "strip_size_kb": 0, 00:20:24.330 "state": "online", 00:20:24.330 "raid_level": "raid1", 00:20:24.330 "superblock": true, 00:20:24.330 "num_base_bdevs": 2, 00:20:24.330 "num_base_bdevs_discovered": 2, 00:20:24.330 "num_base_bdevs_operational": 2, 00:20:24.330 "base_bdevs_list": [ 00:20:24.330 { 00:20:24.330 "name": "BaseBdev1", 00:20:24.330 "uuid": "683bd8a9-fe24-57df-ba4c-31f8ff779b6c", 00:20:24.330 "is_configured": true, 00:20:24.330 "data_offset": 2048, 00:20:24.330 "data_size": 63488 00:20:24.330 }, 00:20:24.330 { 00:20:24.330 "name": "BaseBdev2", 00:20:24.330 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:24.330 "is_configured": true, 00:20:24.330 "data_offset": 2048, 00:20:24.330 "data_size": 63488 00:20:24.330 } 00:20:24.330 ] 00:20:24.330 }' 00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:20:24.330 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 [2024-11-04 14:52:54.558912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 [2024-11-04 14:52:54.658510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.897 14:52:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.897 "name": "raid_bdev1", 00:20:24.897 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:24.897 "strip_size_kb": 0, 00:20:24.897 "state": "online", 00:20:24.897 "raid_level": "raid1", 00:20:24.897 "superblock": true, 00:20:24.897 "num_base_bdevs": 2, 00:20:24.897 "num_base_bdevs_discovered": 1, 00:20:24.897 "num_base_bdevs_operational": 1, 00:20:24.897 "base_bdevs_list": [ 00:20:24.897 { 00:20:24.897 "name": null, 00:20:24.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.897 "is_configured": false, 00:20:24.897 "data_offset": 0, 00:20:24.897 "data_size": 63488 00:20:24.897 }, 00:20:24.897 { 00:20:24.897 "name": "BaseBdev2", 00:20:24.897 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:24.897 "is_configured": true, 00:20:24.897 "data_offset": 2048, 00:20:24.897 "data_size": 63488 00:20:24.897 } 00:20:24.897 ] 00:20:24.897 }' 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.897 14:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 [2024-11-04 14:52:54.786695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:25.156 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:25.156 Zero copy mechanism will not be used. 00:20:25.156 Running I/O for 60 seconds... 
00:20:25.431 14:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:25.431 14:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.431 14:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:25.431 [2024-11-04 14:52:55.173350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.431 14:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.431 14:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:25.431 [2024-11-04 14:52:55.250067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:25.431 [2024-11-04 14:52:55.252629] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.723 [2024-11-04 14:52:55.391180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:25.723 [2024-11-04 14:52:55.525682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:25.723 [2024-11-04 14:52:55.526064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:26.238 167.00 IOPS, 501.00 MiB/s [2024-11-04T14:52:56.130Z] [2024-11-04 14:52:56.019221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.496 [2024-11-04 14:52:56.256093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:26.496 [2024-11-04 14:52:56.256673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:26.496 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.496 "name": "raid_bdev1", 00:20:26.496 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:26.496 "strip_size_kb": 0, 00:20:26.496 "state": "online", 00:20:26.497 "raid_level": "raid1", 00:20:26.497 "superblock": true, 00:20:26.497 "num_base_bdevs": 2, 00:20:26.497 "num_base_bdevs_discovered": 2, 00:20:26.497 "num_base_bdevs_operational": 2, 00:20:26.497 "process": { 00:20:26.497 "type": "rebuild", 00:20:26.497 "target": "spare", 00:20:26.497 "progress": { 00:20:26.497 "blocks": 12288, 00:20:26.497 "percent": 19 00:20:26.497 } 00:20:26.497 }, 00:20:26.497 "base_bdevs_list": [ 00:20:26.497 { 00:20:26.497 "name": "spare", 00:20:26.497 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:26.497 "is_configured": true, 00:20:26.497 "data_offset": 2048, 00:20:26.497 
"data_size": 63488 00:20:26.497 }, 00:20:26.497 { 00:20:26.497 "name": "BaseBdev2", 00:20:26.497 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:26.497 "is_configured": true, 00:20:26.497 "data_offset": 2048, 00:20:26.497 "data_size": 63488 00:20:26.497 } 00:20:26.497 ] 00:20:26.497 }' 00:20:26.497 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.497 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.497 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:26.754 [2024-11-04 14:52:56.394689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.754 [2024-11-04 14:52:56.468315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:26.754 [2024-11-04 14:52:56.468677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:26.754 [2024-11-04 14:52:56.570369] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:26.754 [2024-11-04 14:52:56.580843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.754 [2024-11-04 14:52:56.580899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.754 [2024-11-04 14:52:56.580930] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:20:26.754 [2024-11-04 14:52:56.617302] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.754 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.012 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:27.012 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.012 "name": "raid_bdev1", 00:20:27.012 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:27.012 "strip_size_kb": 0, 00:20:27.012 "state": "online", 00:20:27.012 "raid_level": "raid1", 00:20:27.012 "superblock": true, 00:20:27.012 "num_base_bdevs": 2, 00:20:27.012 "num_base_bdevs_discovered": 1, 00:20:27.012 "num_base_bdevs_operational": 1, 00:20:27.012 "base_bdevs_list": [ 00:20:27.012 { 00:20:27.012 "name": null, 00:20:27.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.012 "is_configured": false, 00:20:27.012 "data_offset": 0, 00:20:27.012 "data_size": 63488 00:20:27.012 }, 00:20:27.012 { 00:20:27.012 "name": "BaseBdev2", 00:20:27.012 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:27.012 "is_configured": true, 00:20:27.012 "data_offset": 2048, 00:20:27.012 "data_size": 63488 00:20:27.012 } 00:20:27.012 ] 00:20:27.012 }' 00:20:27.012 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.012 14:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.577 134.50 IOPS, 403.50 MiB/s [2024-11-04T14:52:57.469Z] 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.577 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.577 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:27.577 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:27.577 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.577 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.577 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.577 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.578 "name": "raid_bdev1", 00:20:27.578 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:27.578 "strip_size_kb": 0, 00:20:27.578 "state": "online", 00:20:27.578 "raid_level": "raid1", 00:20:27.578 "superblock": true, 00:20:27.578 "num_base_bdevs": 2, 00:20:27.578 "num_base_bdevs_discovered": 1, 00:20:27.578 "num_base_bdevs_operational": 1, 00:20:27.578 "base_bdevs_list": [ 00:20:27.578 { 00:20:27.578 "name": null, 00:20:27.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.578 "is_configured": false, 00:20:27.578 "data_offset": 0, 00:20:27.578 "data_size": 63488 00:20:27.578 }, 00:20:27.578 { 00:20:27.578 "name": "BaseBdev2", 00:20:27.578 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:27.578 "is_configured": true, 00:20:27.578 "data_offset": 2048, 00:20:27.578 "data_size": 63488 00:20:27.578 } 00:20:27.578 ] 00:20:27.578 }' 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.578 [2024-11-04 14:52:57.389353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.578 14:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:27.578 [2024-11-04 14:52:57.459875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:27.578 [2024-11-04 14:52:57.462472] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:27.836 [2024-11-04 14:52:57.588961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:27.836 [2024-11-04 14:52:57.589560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:27.836 [2024-11-04 14:52:57.709865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:27.836 [2024-11-04 14:52:57.710215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:28.351 156.67 IOPS, 470.00 MiB/s [2024-11-04T14:52:58.243Z] [2024-11-04 14:52:58.061902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:28.609 [2024-11-04 14:52:58.427869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.609 "name": "raid_bdev1", 00:20:28.609 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:28.609 "strip_size_kb": 0, 00:20:28.609 "state": "online", 00:20:28.609 "raid_level": "raid1", 00:20:28.609 "superblock": true, 00:20:28.609 "num_base_bdevs": 2, 00:20:28.609 "num_base_bdevs_discovered": 2, 00:20:28.609 "num_base_bdevs_operational": 2, 00:20:28.609 "process": { 00:20:28.609 "type": "rebuild", 00:20:28.609 "target": "spare", 00:20:28.609 "progress": { 00:20:28.609 "blocks": 14336, 00:20:28.609 "percent": 22 00:20:28.609 } 00:20:28.609 }, 00:20:28.609 "base_bdevs_list": [ 00:20:28.609 { 00:20:28.609 "name": "spare", 00:20:28.609 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:28.609 "is_configured": true, 00:20:28.609 "data_offset": 2048, 00:20:28.609 "data_size": 63488 00:20:28.609 }, 00:20:28.609 { 00:20:28.609 "name": "BaseBdev2", 00:20:28.609 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:28.609 "is_configured": true, 00:20:28.609 
"data_offset": 2048, 00:20:28.609 "data_size": 63488 00:20:28.609 } 00:20:28.609 ] 00:20:28.609 }' 00:20:28.609 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.888 [2024-11-04 14:52:58.563742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:28.888 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.888 14:52:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.888 "name": "raid_bdev1", 00:20:28.888 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:28.888 "strip_size_kb": 0, 00:20:28.888 "state": "online", 00:20:28.888 "raid_level": "raid1", 00:20:28.888 "superblock": true, 00:20:28.888 "num_base_bdevs": 2, 00:20:28.888 "num_base_bdevs_discovered": 2, 00:20:28.888 "num_base_bdevs_operational": 2, 00:20:28.888 "process": { 00:20:28.888 "type": "rebuild", 00:20:28.888 "target": "spare", 00:20:28.888 "progress": { 00:20:28.888 "blocks": 16384, 00:20:28.888 "percent": 25 00:20:28.888 } 00:20:28.888 }, 00:20:28.888 "base_bdevs_list": [ 00:20:28.888 { 00:20:28.888 "name": "spare", 00:20:28.888 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:28.888 "is_configured": true, 00:20:28.888 "data_offset": 2048, 00:20:28.888 "data_size": 63488 00:20:28.888 }, 00:20:28.888 { 00:20:28.888 "name": "BaseBdev2", 00:20:28.888 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:28.888 "is_configured": true, 00:20:28.888 "data_offset": 2048, 00:20:28.888 "data_size": 63488 00:20:28.888 } 00:20:28.888 ] 00:20:28.888 }' 00:20:28.888 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.889 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.889 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.889 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.889 14:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:29.146 142.50 IOPS, 427.50 MiB/s [2024-11-04T14:52:59.038Z] [2024-11-04 14:52:58.810405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:29.404 [2024-11-04 14:52:59.040114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:29.971 [2024-11-04 14:52:59.702825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.971 
14:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.971 124.60 IOPS, 373.80 MiB/s [2024-11-04T14:52:59.863Z] 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.971 "name": "raid_bdev1", 00:20:29.971 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:29.971 "strip_size_kb": 0, 00:20:29.971 "state": "online", 00:20:29.971 "raid_level": "raid1", 00:20:29.971 "superblock": true, 00:20:29.971 "num_base_bdevs": 2, 00:20:29.971 "num_base_bdevs_discovered": 2, 00:20:29.971 "num_base_bdevs_operational": 2, 00:20:29.971 "process": { 00:20:29.971 "type": "rebuild", 00:20:29.971 "target": "spare", 00:20:29.971 "progress": { 00:20:29.971 "blocks": 32768, 00:20:29.971 "percent": 51 00:20:29.971 } 00:20:29.971 }, 00:20:29.971 "base_bdevs_list": [ 00:20:29.971 { 00:20:29.971 "name": "spare", 00:20:29.971 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:29.971 "is_configured": true, 00:20:29.971 "data_offset": 2048, 00:20:29.971 "data_size": 63488 00:20:29.971 }, 00:20:29.971 { 00:20:29.971 "name": "BaseBdev2", 00:20:29.971 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:29.971 "is_configured": true, 00:20:29.971 "data_offset": 2048, 00:20:29.971 "data_size": 63488 00:20:29.971 } 00:20:29.971 ] 00:20:29.971 }' 00:20:29.971 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.971 [2024-11-04 14:52:59.829140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:30.229 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.229 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.229 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.229 14:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:30.487 [2024-11-04 14:53:00.272763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:30.745 [2024-11-04 14:53:00.607868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:31.261 109.67 IOPS, 329.00 MiB/s [2024-11-04T14:53:01.153Z] 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:31.261 "name": "raid_bdev1", 00:20:31.261 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:31.261 "strip_size_kb": 0, 00:20:31.261 "state": "online", 00:20:31.261 "raid_level": "raid1", 00:20:31.261 "superblock": true, 00:20:31.261 "num_base_bdevs": 2, 00:20:31.261 "num_base_bdevs_discovered": 2, 00:20:31.261 "num_base_bdevs_operational": 2, 00:20:31.261 "process": { 00:20:31.261 "type": "rebuild", 00:20:31.261 "target": "spare", 00:20:31.261 "progress": { 00:20:31.261 "blocks": 51200, 00:20:31.261 "percent": 80 00:20:31.261 } 00:20:31.261 }, 00:20:31.261 "base_bdevs_list": [ 00:20:31.261 { 00:20:31.261 "name": "spare", 00:20:31.261 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:31.261 "is_configured": true, 00:20:31.261 "data_offset": 2048, 00:20:31.261 "data_size": 63488 00:20:31.261 }, 00:20:31.261 { 00:20:31.261 "name": "BaseBdev2", 00:20:31.261 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:31.261 "is_configured": true, 00:20:31.261 "data_offset": 2048, 00:20:31.261 "data_size": 63488 00:20:31.261 } 00:20:31.261 ] 00:20:31.261 }' 00:20:31.261 14:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.261 14:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.261 14:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.261 14:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.261 14:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:31.827 [2024-11-04 14:53:01.621556] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:32.085 [2024-11-04 14:53:01.729433] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:32.085 [2024-11-04 14:53:01.731626] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.343 99.71 IOPS, 299.14 MiB/s [2024-11-04T14:53:02.235Z] 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.343 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.343 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.343 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.343 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.343 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.343 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.343 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.344 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.344 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.344 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.344 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.344 "name": "raid_bdev1", 00:20:32.344 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:32.344 "strip_size_kb": 0, 00:20:32.344 "state": "online", 00:20:32.344 "raid_level": "raid1", 00:20:32.344 "superblock": true, 00:20:32.344 "num_base_bdevs": 2, 00:20:32.344 "num_base_bdevs_discovered": 2, 00:20:32.344 "num_base_bdevs_operational": 2, 00:20:32.344 "base_bdevs_list": [ 00:20:32.344 { 00:20:32.344 "name": "spare", 00:20:32.344 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:32.344 "is_configured": true, 
00:20:32.344 "data_offset": 2048, 00:20:32.344 "data_size": 63488 00:20:32.344 }, 00:20:32.344 { 00:20:32.344 "name": "BaseBdev2", 00:20:32.344 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:32.344 "is_configured": true, 00:20:32.344 "data_offset": 2048, 00:20:32.344 "data_size": 63488 00:20:32.344 } 00:20:32.344 ] 00:20:32.344 }' 00:20:32.344 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.344 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:32.344 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.602 "name": "raid_bdev1", 00:20:32.602 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:32.602 "strip_size_kb": 0, 00:20:32.602 "state": "online", 00:20:32.602 "raid_level": "raid1", 00:20:32.602 "superblock": true, 00:20:32.602 "num_base_bdevs": 2, 00:20:32.602 "num_base_bdevs_discovered": 2, 00:20:32.602 "num_base_bdevs_operational": 2, 00:20:32.602 "base_bdevs_list": [ 00:20:32.602 { 00:20:32.602 "name": "spare", 00:20:32.602 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:32.602 "is_configured": true, 00:20:32.602 "data_offset": 2048, 00:20:32.602 "data_size": 63488 00:20:32.602 }, 00:20:32.602 { 00:20:32.602 "name": "BaseBdev2", 00:20:32.602 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:32.602 "is_configured": true, 00:20:32.602 "data_offset": 2048, 00:20:32.602 "data_size": 63488 00:20:32.602 } 00:20:32.602 ] 00:20:32.602 }' 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.602 14:53:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.602 "name": "raid_bdev1", 00:20:32.602 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:32.602 "strip_size_kb": 0, 00:20:32.602 "state": "online", 00:20:32.602 "raid_level": "raid1", 00:20:32.602 "superblock": true, 00:20:32.602 "num_base_bdevs": 2, 00:20:32.602 "num_base_bdevs_discovered": 2, 00:20:32.602 "num_base_bdevs_operational": 2, 00:20:32.602 "base_bdevs_list": [ 00:20:32.602 { 00:20:32.602 "name": "spare", 00:20:32.602 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:32.602 "is_configured": true, 00:20:32.602 "data_offset": 2048, 00:20:32.602 "data_size": 63488 00:20:32.602 }, 00:20:32.602 { 00:20:32.602 "name": "BaseBdev2", 00:20:32.602 "uuid": 
"4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:32.602 "is_configured": true, 00:20:32.602 "data_offset": 2048, 00:20:32.602 "data_size": 63488 00:20:32.602 } 00:20:32.602 ] 00:20:32.602 }' 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.602 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.167 91.50 IOPS, 274.50 MiB/s [2024-11-04T14:53:03.059Z] 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:33.167 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.167 14:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.167 [2024-11-04 14:53:02.952090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.168 [2024-11-04 14:53:02.952147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.168 00:20:33.168 Latency(us) 00:20:33.168 [2024-11-04T14:53:03.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.168 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:33.168 raid_bdev1 : 8.26 89.84 269.53 0.00 0.00 15126.64 286.72 119632.99 00:20:33.168 [2024-11-04T14:53:03.060Z] =================================================================================================================== 00:20:33.168 [2024-11-04T14:53:03.060Z] Total : 89.84 269.53 0.00 0.00 15126.64 286.72 119632.99 00:20:33.426 [2024-11-04 14:53:03.070263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.426 [2024-11-04 14:53:03.070338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.426 [2024-11-04 14:53:03.070471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:33.426 [2024-11-04 14:53:03.070496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:33.426 { 00:20:33.426 "results": [ 00:20:33.426 { 00:20:33.426 "job": "raid_bdev1", 00:20:33.426 "core_mask": "0x1", 00:20:33.426 "workload": "randrw", 00:20:33.426 "percentage": 50, 00:20:33.426 "status": "finished", 00:20:33.426 "queue_depth": 2, 00:20:33.426 "io_size": 3145728, 00:20:33.426 "runtime": 8.258777, 00:20:33.426 "iops": 89.84381101463327, 00:20:33.426 "mibps": 269.5314330438998, 00:20:33.426 "io_failed": 0, 00:20:33.426 "io_timeout": 0, 00:20:33.426 "avg_latency_us": 15126.639157069347, 00:20:33.426 "min_latency_us": 286.72, 00:20:33.426 "max_latency_us": 119632.98909090909 00:20:33.426 } 00:20:33.426 ], 00:20:33.426 "core_count": 1 00:20:33.426 } 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk.sock 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:33.426 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:33.684 /dev/nbd0 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:33.684 1+0 records in 00:20:33.684 1+0 records out 00:20:33.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337528 s, 12.1 MB/s 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 
00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:33.684 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:33.943 /dev/nbd1 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:33.943 1+0 records in 00:20:33.943 1+0 records out 00:20:33.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393401 s, 10.4 MB/s 
00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:33.943 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:34.201 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:34.201 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:34.201 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:34.201 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.201 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:34.201 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.201 14:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.459 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.717 14:53:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.717 [2024-11-04 14:53:04.583769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:34.717 [2024-11-04 14:53:04.583853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.717 [2024-11-04 14:53:04.583897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:34.717 [2024-11-04 14:53:04.583914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.717 [2024-11-04 14:53:04.587252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.717 [2024-11-04 14:53:04.587312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:34.717 [2024-11-04 14:53:04.587428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:20:34.717 [2024-11-04 14:53:04.587509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:34.717 [2024-11-04 14:53:04.587717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.717 spare 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.717 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 [2024-11-04 14:53:04.687929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:34.975 [2024-11-04 14:53:04.687959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:34.975 [2024-11-04 14:53:04.688372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:20:34.975 [2024-11-04 14:53:04.688647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:34.975 [2024-11-04 14:53:04.688711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:34.975 [2024-11-04 14:53:04.688974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.975 
14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.975 "name": "raid_bdev1", 00:20:34.975 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:34.975 "strip_size_kb": 0, 00:20:34.975 "state": "online", 00:20:34.975 "raid_level": "raid1", 00:20:34.975 "superblock": true, 00:20:34.975 "num_base_bdevs": 2, 00:20:34.975 "num_base_bdevs_discovered": 2, 00:20:34.975 "num_base_bdevs_operational": 2, 00:20:34.975 "base_bdevs_list": [ 00:20:34.975 { 00:20:34.975 "name": "spare", 00:20:34.975 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:34.975 "is_configured": true, 00:20:34.975 "data_offset": 2048, 00:20:34.975 
"data_size": 63488 00:20:34.975 }, 00:20:34.975 { 00:20:34.975 "name": "BaseBdev2", 00:20:34.975 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:34.975 "is_configured": true, 00:20:34.975 "data_offset": 2048, 00:20:34.975 "data_size": 63488 00:20:34.975 } 00:20:34.975 ] 00:20:34.975 }' 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.975 14:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.541 "name": "raid_bdev1", 00:20:35.541 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:35.541 "strip_size_kb": 0, 00:20:35.541 "state": "online", 00:20:35.541 "raid_level": "raid1", 00:20:35.541 "superblock": true, 00:20:35.541 "num_base_bdevs": 2, 
00:20:35.541 "num_base_bdevs_discovered": 2, 00:20:35.541 "num_base_bdevs_operational": 2, 00:20:35.541 "base_bdevs_list": [ 00:20:35.541 { 00:20:35.541 "name": "spare", 00:20:35.541 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:35.541 "is_configured": true, 00:20:35.541 "data_offset": 2048, 00:20:35.541 "data_size": 63488 00:20:35.541 }, 00:20:35.541 { 00:20:35.541 "name": "BaseBdev2", 00:20:35.541 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:35.541 "is_configured": true, 00:20:35.541 "data_offset": 2048, 00:20:35.541 "data_size": 63488 00:20:35.541 } 00:20:35.541 ] 00:20:35.541 }' 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:35.541 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.799 14:53:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.799 [2024-11-04 14:53:05.440407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.799 
14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.799 "name": "raid_bdev1", 00:20:35.799 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:35.799 "strip_size_kb": 0, 00:20:35.799 "state": "online", 00:20:35.799 "raid_level": "raid1", 00:20:35.799 "superblock": true, 00:20:35.799 "num_base_bdevs": 2, 00:20:35.799 "num_base_bdevs_discovered": 1, 00:20:35.799 "num_base_bdevs_operational": 1, 00:20:35.799 "base_bdevs_list": [ 00:20:35.799 { 00:20:35.799 "name": null, 00:20:35.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.799 "is_configured": false, 00:20:35.799 "data_offset": 0, 00:20:35.799 "data_size": 63488 00:20:35.799 }, 00:20:35.799 { 00:20:35.799 "name": "BaseBdev2", 00:20:35.799 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:35.799 "is_configured": true, 00:20:35.799 "data_offset": 2048, 00:20:35.799 "data_size": 63488 00:20:35.799 } 00:20:35.799 ] 00:20:35.799 }' 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.799 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.364 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:36.364 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.364 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.364 [2024-11-04 14:53:05.968854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.364 [2024-11-04 14:53:05.969131] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:36.364 [2024-11-04 14:53:05.969153] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:36.364 [2024-11-04 14:53:05.969221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.364 [2024-11-04 14:53:05.986989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:20:36.364 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.364 14:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:36.364 [2024-11-04 14:53:05.989787] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.298 14:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.298 "name": "raid_bdev1", 00:20:37.298 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:37.298 "strip_size_kb": 0, 00:20:37.298 "state": "online", 
00:20:37.298 "raid_level": "raid1", 00:20:37.298 "superblock": true, 00:20:37.298 "num_base_bdevs": 2, 00:20:37.298 "num_base_bdevs_discovered": 2, 00:20:37.298 "num_base_bdevs_operational": 2, 00:20:37.298 "process": { 00:20:37.298 "type": "rebuild", 00:20:37.298 "target": "spare", 00:20:37.298 "progress": { 00:20:37.298 "blocks": 20480, 00:20:37.298 "percent": 32 00:20:37.298 } 00:20:37.298 }, 00:20:37.298 "base_bdevs_list": [ 00:20:37.298 { 00:20:37.298 "name": "spare", 00:20:37.298 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:37.298 "is_configured": true, 00:20:37.298 "data_offset": 2048, 00:20:37.298 "data_size": 63488 00:20:37.298 }, 00:20:37.298 { 00:20:37.298 "name": "BaseBdev2", 00:20:37.298 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:37.298 "is_configured": true, 00:20:37.298 "data_offset": 2048, 00:20:37.298 "data_size": 63488 00:20:37.298 } 00:20:37.298 ] 00:20:37.298 }' 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.298 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.298 [2024-11-04 14:53:07.167643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.556 [2024-11-04 14:53:07.199414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:37.556 [2024-11-04 
14:53:07.199530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.556 [2024-11-04 14:53:07.199560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.556 [2024-11-04 14:53:07.199588] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.556 "name": "raid_bdev1", 00:20:37.556 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:37.556 "strip_size_kb": 0, 00:20:37.556 "state": "online", 00:20:37.556 "raid_level": "raid1", 00:20:37.556 "superblock": true, 00:20:37.556 "num_base_bdevs": 2, 00:20:37.556 "num_base_bdevs_discovered": 1, 00:20:37.556 "num_base_bdevs_operational": 1, 00:20:37.556 "base_bdevs_list": [ 00:20:37.556 { 00:20:37.556 "name": null, 00:20:37.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.556 "is_configured": false, 00:20:37.556 "data_offset": 0, 00:20:37.556 "data_size": 63488 00:20:37.556 }, 00:20:37.556 { 00:20:37.556 "name": "BaseBdev2", 00:20:37.556 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:37.556 "is_configured": true, 00:20:37.556 "data_offset": 2048, 00:20:37.556 "data_size": 63488 00:20:37.556 } 00:20:37.556 ] 00:20:37.556 }' 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.556 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:38.123 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:38.123 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.123 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:38.123 [2024-11-04 14:53:07.793873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:38.123 [2024-11-04 14:53:07.793979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.123 [2024-11-04 14:53:07.794021] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:20:38.123 [2024-11-04 14:53:07.794042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.123 [2024-11-04 14:53:07.794787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.123 [2024-11-04 14:53:07.794823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:38.123 [2024-11-04 14:53:07.794974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:38.123 [2024-11-04 14:53:07.794994] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:38.123 [2024-11-04 14:53:07.795011] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:38.123 [2024-11-04 14:53:07.795040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:38.123 [2024-11-04 14:53:07.812980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:20:38.123 spare 00:20:38.123 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.123 14:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:38.123 [2024-11-04 14:53:07.815725] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.057 "name": "raid_bdev1", 00:20:39.057 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:39.057 "strip_size_kb": 0, 00:20:39.057 "state": "online", 00:20:39.057 "raid_level": "raid1", 00:20:39.057 "superblock": true, 00:20:39.057 "num_base_bdevs": 2, 00:20:39.057 "num_base_bdevs_discovered": 2, 00:20:39.057 "num_base_bdevs_operational": 2, 00:20:39.057 "process": { 00:20:39.057 "type": "rebuild", 00:20:39.057 "target": "spare", 00:20:39.057 "progress": { 00:20:39.057 "blocks": 20480, 00:20:39.057 "percent": 32 00:20:39.057 } 00:20:39.057 }, 00:20:39.057 "base_bdevs_list": [ 00:20:39.057 { 00:20:39.057 "name": "spare", 00:20:39.057 "uuid": "112f3b82-87a5-5b34-b917-8b0e2b45498d", 00:20:39.057 "is_configured": true, 00:20:39.057 "data_offset": 2048, 00:20:39.057 "data_size": 63488 00:20:39.057 }, 00:20:39.057 { 00:20:39.057 "name": "BaseBdev2", 00:20:39.057 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:39.057 "is_configured": true, 00:20:39.057 "data_offset": 2048, 00:20:39.057 "data_size": 63488 00:20:39.057 } 00:20:39.057 ] 00:20:39.057 }' 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:39.057 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.316 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.316 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:39.316 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.316 14:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.316 [2024-11-04 14:53:09.001797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:39.316 [2024-11-04 14:53:09.025414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:39.316 [2024-11-04 14:53:09.025509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.316 [2024-11-04 14:53:09.025535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:39.316 [2024-11-04 14:53:09.025551] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.316 "name": "raid_bdev1", 00:20:39.316 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:39.316 "strip_size_kb": 0, 00:20:39.316 "state": "online", 00:20:39.316 "raid_level": "raid1", 00:20:39.316 "superblock": true, 00:20:39.316 "num_base_bdevs": 2, 00:20:39.316 "num_base_bdevs_discovered": 1, 00:20:39.316 "num_base_bdevs_operational": 1, 00:20:39.316 "base_bdevs_list": [ 00:20:39.316 { 00:20:39.316 "name": null, 00:20:39.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.316 "is_configured": false, 00:20:39.316 "data_offset": 0, 00:20:39.316 "data_size": 63488 00:20:39.316 }, 00:20:39.316 { 00:20:39.316 "name": "BaseBdev2", 00:20:39.316 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:39.316 "is_configured": true, 00:20:39.316 "data_offset": 2048, 00:20:39.316 "data_size": 63488 00:20:39.316 } 00:20:39.316 ] 00:20:39.316 }' 
00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.316 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.884 "name": "raid_bdev1", 00:20:39.884 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:39.884 "strip_size_kb": 0, 00:20:39.884 "state": "online", 00:20:39.884 "raid_level": "raid1", 00:20:39.884 "superblock": true, 00:20:39.884 "num_base_bdevs": 2, 00:20:39.884 "num_base_bdevs_discovered": 1, 00:20:39.884 "num_base_bdevs_operational": 1, 00:20:39.884 "base_bdevs_list": [ 00:20:39.884 { 00:20:39.884 "name": null, 00:20:39.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.884 "is_configured": false, 00:20:39.884 "data_offset": 0, 
00:20:39.884 "data_size": 63488 00:20:39.884 }, 00:20:39.884 { 00:20:39.884 "name": "BaseBdev2", 00:20:39.884 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:39.884 "is_configured": true, 00:20:39.884 "data_offset": 2048, 00:20:39.884 "data_size": 63488 00:20:39.884 } 00:20:39.884 ] 00:20:39.884 }' 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.884 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.884 [2024-11-04 14:53:09.771308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:39.884 [2024-11-04 14:53:09.771409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.884 [2024-11-04 14:53:09.771439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:39.884 [2024-11-04 14:53:09.771461] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.884 [2024-11-04 14:53:09.772016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.884 [2024-11-04 14:53:09.772055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:39.884 [2024-11-04 14:53:09.772153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:39.884 [2024-11-04 14:53:09.772182] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:39.884 [2024-11-04 14:53:09.772194] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:39.884 [2024-11-04 14:53:09.772213] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:40.142 BaseBdev1 00:20:40.142 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.142 14:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.086 "name": "raid_bdev1", 00:20:41.086 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:41.086 "strip_size_kb": 0, 00:20:41.086 "state": "online", 00:20:41.086 "raid_level": "raid1", 00:20:41.086 "superblock": true, 00:20:41.086 "num_base_bdevs": 2, 00:20:41.086 "num_base_bdevs_discovered": 1, 00:20:41.086 "num_base_bdevs_operational": 1, 00:20:41.086 "base_bdevs_list": [ 00:20:41.086 { 00:20:41.086 "name": null, 00:20:41.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.086 "is_configured": false, 00:20:41.086 "data_offset": 0, 00:20:41.086 "data_size": 63488 00:20:41.086 }, 00:20:41.086 { 00:20:41.086 "name": "BaseBdev2", 00:20:41.086 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:41.086 "is_configured": true, 00:20:41.086 "data_offset": 2048, 00:20:41.086 "data_size": 63488 00:20:41.086 } 00:20:41.086 ] 00:20:41.086 }' 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.086 14:53:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.654 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.654 "name": "raid_bdev1", 00:20:41.654 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:41.654 "strip_size_kb": 0, 00:20:41.654 "state": "online", 00:20:41.654 "raid_level": "raid1", 00:20:41.654 "superblock": true, 00:20:41.654 "num_base_bdevs": 2, 00:20:41.654 "num_base_bdevs_discovered": 1, 00:20:41.654 "num_base_bdevs_operational": 1, 00:20:41.654 "base_bdevs_list": [ 00:20:41.654 { 00:20:41.654 "name": null, 00:20:41.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.654 "is_configured": false, 00:20:41.654 "data_offset": 0, 00:20:41.654 "data_size": 63488 00:20:41.654 }, 00:20:41.654 { 00:20:41.654 "name": "BaseBdev2", 00:20:41.654 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:41.655 "is_configured": true, 
00:20:41.655 "data_offset": 2048, 00:20:41.655 "data_size": 63488 00:20:41.655 } 00:20:41.655 ] 00:20:41.655 }' 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:41.655 [2024-11-04 14:53:11.452389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.655 [2024-11-04 14:53:11.452652] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:41.655 [2024-11-04 14:53:11.452672] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:41.655 request: 00:20:41.655 { 00:20:41.655 "base_bdev": "BaseBdev1", 00:20:41.655 "raid_bdev": "raid_bdev1", 00:20:41.655 "method": "bdev_raid_add_base_bdev", 00:20:41.655 "req_id": 1 00:20:41.655 } 00:20:41.655 Got JSON-RPC error response 00:20:41.655 response: 00:20:41.655 { 00:20:41.655 "code": -22, 00:20:41.655 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:41.655 } 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.655 14:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.589 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:42.848 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.848 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.848 "name": "raid_bdev1", 00:20:42.848 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:42.848 "strip_size_kb": 0, 00:20:42.848 "state": "online", 00:20:42.848 "raid_level": "raid1", 00:20:42.848 "superblock": true, 00:20:42.848 "num_base_bdevs": 2, 00:20:42.848 "num_base_bdevs_discovered": 1, 00:20:42.848 "num_base_bdevs_operational": 1, 00:20:42.848 "base_bdevs_list": [ 00:20:42.848 { 00:20:42.848 "name": null, 00:20:42.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.848 "is_configured": false, 00:20:42.848 "data_offset": 0, 00:20:42.848 "data_size": 63488 00:20:42.848 }, 00:20:42.848 { 00:20:42.848 "name": "BaseBdev2", 00:20:42.848 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:42.848 "is_configured": true, 00:20:42.848 "data_offset": 2048, 00:20:42.848 "data_size": 63488 00:20:42.848 } 00:20:42.848 ] 00:20:42.848 }' 
00:20:42.848 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.848 14:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.415 "name": "raid_bdev1", 00:20:43.415 "uuid": "bb934d68-ed25-416d-800b-2b12f45f5468", 00:20:43.415 "strip_size_kb": 0, 00:20:43.415 "state": "online", 00:20:43.415 "raid_level": "raid1", 00:20:43.415 "superblock": true, 00:20:43.415 "num_base_bdevs": 2, 00:20:43.415 "num_base_bdevs_discovered": 1, 00:20:43.415 "num_base_bdevs_operational": 1, 00:20:43.415 "base_bdevs_list": [ 00:20:43.415 { 00:20:43.415 "name": null, 00:20:43.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.415 "is_configured": false, 00:20:43.415 "data_offset": 0, 
00:20:43.415 "data_size": 63488 00:20:43.415 }, 00:20:43.415 { 00:20:43.415 "name": "BaseBdev2", 00:20:43.415 "uuid": "4e531e3b-c358-5f1b-a74b-37ed103acbff", 00:20:43.415 "is_configured": true, 00:20:43.415 "data_offset": 2048, 00:20:43.415 "data_size": 63488 00:20:43.415 } 00:20:43.415 ] 00:20:43.415 }' 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77231 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77231 ']' 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77231 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77231 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:43.415 killing process with pid 77231 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77231' 00:20:43.415 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77231 00:20:43.415 Received shutdown signal, test time was 
about 18.445720 seconds 00:20:43.415 00:20:43.415 Latency(us) 00:20:43.415 [2024-11-04T14:53:13.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.415 [2024-11-04T14:53:13.307Z] =================================================================================================================== 00:20:43.415 [2024-11-04T14:53:13.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.416 14:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77231 00:20:43.416 [2024-11-04 14:53:13.234876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.416 [2024-11-04 14:53:13.235053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.416 [2024-11-04 14:53:13.235141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.416 [2024-11-04 14:53:13.235163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:43.674 [2024-11-04 14:53:13.440135] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:45.071 00:20:45.071 real 0m21.870s 00:20:45.071 user 0m29.784s 00:20:45.071 sys 0m2.105s 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:45.071 ************************************ 00:20:45.071 END TEST raid_rebuild_test_sb_io 00:20:45.071 ************************************ 00:20:45.071 14:53:14 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:20:45.071 14:53:14 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:20:45.071 14:53:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:45.071 
14:53:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:45.071 14:53:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.071 ************************************ 00:20:45.071 START TEST raid_rebuild_test 00:20:45.071 ************************************ 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:20:45.071 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77938 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77938 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77938 ']' 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.072 14:53:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:45.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:45.072 14:53:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.072 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:45.072 Zero copy mechanism will not be used. 00:20:45.072 [2024-11-04 14:53:14.734688] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:20:45.072 [2024-11-04 14:53:14.734895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77938 ] 00:20:45.072 [2024-11-04 14:53:14.921583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.329 [2024-11-04 14:53:15.060821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.588 [2024-11-04 14:53:15.277707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.588 [2024-11-04 14:53:15.277784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.846 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:45.846 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:20:45.846 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:45.846 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:20:45.846 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.846 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 BaseBdev1_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 [2024-11-04 14:53:15.768694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:46.105 [2024-11-04 14:53:15.768809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.105 [2024-11-04 14:53:15.768842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:46.105 [2024-11-04 14:53:15.768862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.105 [2024-11-04 14:53:15.771648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.105 [2024-11-04 14:53:15.771709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.105 BaseBdev1 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:20:46.105 BaseBdev2_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 [2024-11-04 14:53:15.821461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:46.105 [2024-11-04 14:53:15.821555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.105 [2024-11-04 14:53:15.821584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:46.105 [2024-11-04 14:53:15.821616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.105 [2024-11-04 14:53:15.824417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.105 [2024-11-04 14:53:15.824472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:46.105 BaseBdev2 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 BaseBdev3_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 [2024-11-04 14:53:15.888081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:46.105 [2024-11-04 14:53:15.888160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.105 [2024-11-04 14:53:15.888192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:46.105 [2024-11-04 14:53:15.888211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.105 [2024-11-04 14:53:15.890923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.105 [2024-11-04 14:53:15.890972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:46.105 BaseBdev3 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 BaseBdev4_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.105 [2024-11-04 14:53:15.936616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:46.105 [2024-11-04 14:53:15.936690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.105 [2024-11-04 14:53:15.936728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:46.105 [2024-11-04 14:53:15.936746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.105 [2024-11-04 14:53:15.939481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.105 [2024-11-04 14:53:15.939531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:46.105 BaseBdev4 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 spare_malloc 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 spare_delay 00:20:46.105 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.364 14:53:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:46.364 
14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.364 14:53:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.364 [2024-11-04 14:53:15.998214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:46.364 [2024-11-04 14:53:15.998341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.364 [2024-11-04 14:53:15.998370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:46.364 [2024-11-04 14:53:15.998388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.364 [2024-11-04 14:53:16.001582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.365 [2024-11-04 14:53:16.001654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:46.365 spare 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.365 [2024-11-04 14:53:16.006369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.365 [2024-11-04 14:53:16.009487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.365 [2024-11-04 14:53:16.009635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.365 [2024-11-04 14:53:16.009728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:46.365 [2024-11-04 14:53:16.009854] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:20:46.365 [2024-11-04 14:53:16.009876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:46.365 [2024-11-04 14:53:16.010239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:46.365 [2024-11-04 14:53:16.010606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:46.365 [2024-11-04 14:53:16.010633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:46.365 [2024-11-04 14:53:16.010941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.365 14:53:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.365 "name": "raid_bdev1", 00:20:46.365 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:20:46.365 "strip_size_kb": 0, 00:20:46.365 "state": "online", 00:20:46.365 "raid_level": "raid1", 00:20:46.365 "superblock": false, 00:20:46.365 "num_base_bdevs": 4, 00:20:46.365 "num_base_bdevs_discovered": 4, 00:20:46.365 "num_base_bdevs_operational": 4, 00:20:46.365 "base_bdevs_list": [ 00:20:46.365 { 00:20:46.365 "name": "BaseBdev1", 00:20:46.365 "uuid": "869ab457-e4a2-563c-b8f7-e0da2e0e328a", 00:20:46.365 "is_configured": true, 00:20:46.365 "data_offset": 0, 00:20:46.365 "data_size": 65536 00:20:46.365 }, 00:20:46.365 { 00:20:46.365 "name": "BaseBdev2", 00:20:46.365 "uuid": "989627e0-8796-5d65-9964-522c3762e8a7", 00:20:46.365 "is_configured": true, 00:20:46.365 "data_offset": 0, 00:20:46.365 "data_size": 65536 00:20:46.365 }, 00:20:46.365 { 00:20:46.365 "name": "BaseBdev3", 00:20:46.365 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:20:46.365 "is_configured": true, 00:20:46.365 "data_offset": 0, 00:20:46.365 "data_size": 65536 00:20:46.365 }, 00:20:46.365 { 00:20:46.365 "name": "BaseBdev4", 00:20:46.365 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:20:46.365 "is_configured": true, 00:20:46.365 "data_offset": 0, 00:20:46.365 "data_size": 65536 00:20:46.365 } 00:20:46.365 ] 00:20:46.365 }' 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.365 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.932 [2024-11-04 14:53:16.539729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:46.932 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:47.190 [2024-11-04 14:53:16.919329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:47.190 /dev/nbd0 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:47.190 14:53:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.190 1+0 records in 00:20:47.190 1+0 records out 00:20:47.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337141 s, 12.1 MB/s 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:47.190 14:53:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:55.292 65536+0 records in 00:20:55.292 65536+0 records out 00:20:55.292 33554432 bytes (34 MB, 32 MiB) copied, 7.83411 s, 4.3 MB/s 00:20:55.292 14:53:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:55.292 14:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:55.292 14:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:55.292 14:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:55.292 
14:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:55.292 14:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.292 14:53:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:55.292 [2024-11-04 14:53:25.118956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.292 [2024-11-04 14:53:25.155090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.292 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.550 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.550 "name": "raid_bdev1", 00:20:55.550 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:20:55.550 "strip_size_kb": 0, 00:20:55.550 "state": "online", 00:20:55.550 "raid_level": "raid1", 00:20:55.550 "superblock": false, 00:20:55.550 "num_base_bdevs": 4, 00:20:55.550 "num_base_bdevs_discovered": 3, 00:20:55.550 "num_base_bdevs_operational": 3, 00:20:55.550 "base_bdevs_list": [ 00:20:55.550 { 00:20:55.550 "name": null, 00:20:55.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.550 
"is_configured": false, 00:20:55.550 "data_offset": 0, 00:20:55.550 "data_size": 65536 00:20:55.550 }, 00:20:55.550 { 00:20:55.550 "name": "BaseBdev2", 00:20:55.550 "uuid": "989627e0-8796-5d65-9964-522c3762e8a7", 00:20:55.550 "is_configured": true, 00:20:55.550 "data_offset": 0, 00:20:55.550 "data_size": 65536 00:20:55.550 }, 00:20:55.550 { 00:20:55.550 "name": "BaseBdev3", 00:20:55.550 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:20:55.550 "is_configured": true, 00:20:55.550 "data_offset": 0, 00:20:55.550 "data_size": 65536 00:20:55.550 }, 00:20:55.550 { 00:20:55.550 "name": "BaseBdev4", 00:20:55.550 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:20:55.550 "is_configured": true, 00:20:55.550 "data_offset": 0, 00:20:55.550 "data_size": 65536 00:20:55.550 } 00:20:55.550 ] 00:20:55.550 }' 00:20:55.550 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.550 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.808 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:55.808 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.808 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.808 [2024-11-04 14:53:25.671247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.808 [2024-11-04 14:53:25.683312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:20:55.808 14:53:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.808 14:53:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:55.808 [2024-11-04 14:53:25.685475] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.180 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.180 "name": "raid_bdev1", 00:20:57.181 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:20:57.181 "strip_size_kb": 0, 00:20:57.181 "state": "online", 00:20:57.181 "raid_level": "raid1", 00:20:57.181 "superblock": false, 00:20:57.181 "num_base_bdevs": 4, 00:20:57.181 "num_base_bdevs_discovered": 4, 00:20:57.181 "num_base_bdevs_operational": 4, 00:20:57.181 "process": { 00:20:57.181 "type": "rebuild", 00:20:57.181 "target": "spare", 00:20:57.181 "progress": { 00:20:57.181 "blocks": 20480, 00:20:57.181 "percent": 31 00:20:57.181 } 00:20:57.181 }, 00:20:57.181 "base_bdevs_list": [ 00:20:57.181 { 00:20:57.181 "name": "spare", 00:20:57.181 "uuid": "0ab6996e-92c2-5f01-a298-10e29982acfe", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 0, 00:20:57.181 "data_size": 65536 00:20:57.181 }, 00:20:57.181 { 00:20:57.181 "name": "BaseBdev2", 00:20:57.181 "uuid": 
"989627e0-8796-5d65-9964-522c3762e8a7", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 0, 00:20:57.181 "data_size": 65536 00:20:57.181 }, 00:20:57.181 { 00:20:57.181 "name": "BaseBdev3", 00:20:57.181 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 0, 00:20:57.181 "data_size": 65536 00:20:57.181 }, 00:20:57.181 { 00:20:57.181 "name": "BaseBdev4", 00:20:57.181 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 0, 00:20:57.181 "data_size": 65536 00:20:57.181 } 00:20:57.181 ] 00:20:57.181 }' 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.181 [2024-11-04 14:53:26.847450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.181 [2024-11-04 14:53:26.894840] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:57.181 [2024-11-04 14:53:26.894952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.181 [2024-11-04 14:53:26.894980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.181 [2024-11-04 14:53:26.894995] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.181 "name": "raid_bdev1", 00:20:57.181 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:20:57.181 "strip_size_kb": 0, 00:20:57.181 "state": "online", 
00:20:57.181 "raid_level": "raid1", 00:20:57.181 "superblock": false, 00:20:57.181 "num_base_bdevs": 4, 00:20:57.181 "num_base_bdevs_discovered": 3, 00:20:57.181 "num_base_bdevs_operational": 3, 00:20:57.181 "base_bdevs_list": [ 00:20:57.181 { 00:20:57.181 "name": null, 00:20:57.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.181 "is_configured": false, 00:20:57.181 "data_offset": 0, 00:20:57.181 "data_size": 65536 00:20:57.181 }, 00:20:57.181 { 00:20:57.181 "name": "BaseBdev2", 00:20:57.181 "uuid": "989627e0-8796-5d65-9964-522c3762e8a7", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 0, 00:20:57.181 "data_size": 65536 00:20:57.181 }, 00:20:57.181 { 00:20:57.181 "name": "BaseBdev3", 00:20:57.181 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 0, 00:20:57.181 "data_size": 65536 00:20:57.181 }, 00:20:57.181 { 00:20:57.181 "name": "BaseBdev4", 00:20:57.181 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 0, 00:20:57.181 "data_size": 65536 00:20:57.181 } 00:20:57.181 ] 00:20:57.181 }' 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.181 14:53:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.747 "name": "raid_bdev1", 00:20:57.747 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:20:57.747 "strip_size_kb": 0, 00:20:57.747 "state": "online", 00:20:57.747 "raid_level": "raid1", 00:20:57.747 "superblock": false, 00:20:57.747 "num_base_bdevs": 4, 00:20:57.747 "num_base_bdevs_discovered": 3, 00:20:57.747 "num_base_bdevs_operational": 3, 00:20:57.747 "base_bdevs_list": [ 00:20:57.747 { 00:20:57.747 "name": null, 00:20:57.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.747 "is_configured": false, 00:20:57.747 "data_offset": 0, 00:20:57.747 "data_size": 65536 00:20:57.747 }, 00:20:57.747 { 00:20:57.747 "name": "BaseBdev2", 00:20:57.747 "uuid": "989627e0-8796-5d65-9964-522c3762e8a7", 00:20:57.747 "is_configured": true, 00:20:57.747 "data_offset": 0, 00:20:57.747 "data_size": 65536 00:20:57.747 }, 00:20:57.747 { 00:20:57.747 "name": "BaseBdev3", 00:20:57.747 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:20:57.747 "is_configured": true, 00:20:57.747 "data_offset": 0, 00:20:57.747 "data_size": 65536 00:20:57.747 }, 00:20:57.747 { 00:20:57.747 "name": "BaseBdev4", 00:20:57.747 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:20:57.747 "is_configured": true, 00:20:57.747 "data_offset": 0, 00:20:57.747 "data_size": 65536 00:20:57.747 } 00:20:57.747 ] 00:20:57.747 }' 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.747 [2024-11-04 14:53:27.570996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.747 [2024-11-04 14:53:27.586171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.747 14:53:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:57.747 [2024-11-04 14:53:27.588958] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:59.122 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.122 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.122 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.122 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.122 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.122 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.123 14:53:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.123 "name": "raid_bdev1", 00:20:59.123 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:20:59.123 "strip_size_kb": 0, 00:20:59.123 "state": "online", 00:20:59.123 "raid_level": "raid1", 00:20:59.123 "superblock": false, 00:20:59.123 "num_base_bdevs": 4, 00:20:59.123 "num_base_bdevs_discovered": 4, 00:20:59.123 "num_base_bdevs_operational": 4, 00:20:59.123 "process": { 00:20:59.123 "type": "rebuild", 00:20:59.123 "target": "spare", 00:20:59.123 "progress": { 00:20:59.123 "blocks": 20480, 00:20:59.123 "percent": 31 00:20:59.123 } 00:20:59.123 }, 00:20:59.123 "base_bdevs_list": [ 00:20:59.123 { 00:20:59.123 "name": "spare", 00:20:59.123 "uuid": "0ab6996e-92c2-5f01-a298-10e29982acfe", 00:20:59.123 "is_configured": true, 00:20:59.123 "data_offset": 0, 00:20:59.123 "data_size": 65536 00:20:59.123 }, 00:20:59.123 { 00:20:59.123 "name": "BaseBdev2", 00:20:59.123 "uuid": "989627e0-8796-5d65-9964-522c3762e8a7", 00:20:59.123 "is_configured": true, 00:20:59.123 "data_offset": 0, 00:20:59.123 "data_size": 65536 00:20:59.123 }, 00:20:59.123 { 00:20:59.123 "name": "BaseBdev3", 00:20:59.123 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:20:59.123 "is_configured": true, 00:20:59.123 "data_offset": 0, 00:20:59.123 "data_size": 65536 00:20:59.123 }, 00:20:59.123 { 00:20:59.123 "name": "BaseBdev4", 00:20:59.123 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:20:59.123 "is_configured": true, 00:20:59.123 "data_offset": 0, 00:20:59.123 "data_size": 65536 00:20:59.123 } 00:20:59.123 ] 00:20:59.123 }' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.123 [2024-11-04 14:53:28.738922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:59.123 [2024-11-04 14:53:28.798816] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.123 
14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.123 "name": "raid_bdev1", 00:20:59.123 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:20:59.123 "strip_size_kb": 0, 00:20:59.123 "state": "online", 00:20:59.123 "raid_level": "raid1", 00:20:59.123 "superblock": false, 00:20:59.123 "num_base_bdevs": 4, 00:20:59.123 "num_base_bdevs_discovered": 3, 00:20:59.123 "num_base_bdevs_operational": 3, 00:20:59.123 "process": { 00:20:59.123 "type": "rebuild", 00:20:59.123 "target": "spare", 00:20:59.123 "progress": { 00:20:59.123 "blocks": 24576, 00:20:59.123 "percent": 37 00:20:59.123 } 00:20:59.123 }, 00:20:59.123 "base_bdevs_list": [ 00:20:59.123 { 00:20:59.123 "name": "spare", 00:20:59.123 "uuid": "0ab6996e-92c2-5f01-a298-10e29982acfe", 00:20:59.123 "is_configured": true, 00:20:59.123 "data_offset": 0, 00:20:59.123 "data_size": 65536 00:20:59.123 }, 00:20:59.123 { 00:20:59.123 "name": null, 00:20:59.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.123 "is_configured": false, 00:20:59.123 "data_offset": 0, 00:20:59.123 "data_size": 65536 00:20:59.123 }, 00:20:59.123 { 00:20:59.123 "name": "BaseBdev3", 00:20:59.123 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:20:59.123 "is_configured": true, 
00:20:59.123 "data_offset": 0, 00:20:59.123 "data_size": 65536 00:20:59.123 }, 00:20:59.123 { 00:20:59.123 "name": "BaseBdev4", 00:20:59.123 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:20:59.123 "is_configured": true, 00:20:59.123 "data_offset": 0, 00:20:59.123 "data_size": 65536 00:20:59.123 } 00:20:59.123 ] 00:20:59.123 }' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=490 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.123 14:53:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.123 14:53:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.382 14:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.382 "name": "raid_bdev1", 00:20:59.382 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:20:59.382 "strip_size_kb": 0, 00:20:59.382 "state": "online", 00:20:59.382 "raid_level": "raid1", 00:20:59.382 "superblock": false, 00:20:59.382 "num_base_bdevs": 4, 00:20:59.382 "num_base_bdevs_discovered": 3, 00:20:59.382 "num_base_bdevs_operational": 3, 00:20:59.382 "process": { 00:20:59.382 "type": "rebuild", 00:20:59.382 "target": "spare", 00:20:59.382 "progress": { 00:20:59.382 "blocks": 26624, 00:20:59.382 "percent": 40 00:20:59.382 } 00:20:59.382 }, 00:20:59.382 "base_bdevs_list": [ 00:20:59.382 { 00:20:59.382 "name": "spare", 00:20:59.382 "uuid": "0ab6996e-92c2-5f01-a298-10e29982acfe", 00:20:59.382 "is_configured": true, 00:20:59.382 "data_offset": 0, 00:20:59.382 "data_size": 65536 00:20:59.382 }, 00:20:59.382 { 00:20:59.382 "name": null, 00:20:59.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.382 "is_configured": false, 00:20:59.382 "data_offset": 0, 00:20:59.382 "data_size": 65536 00:20:59.382 }, 00:20:59.382 { 00:20:59.382 "name": "BaseBdev3", 00:20:59.382 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:20:59.382 "is_configured": true, 00:20:59.382 "data_offset": 0, 00:20:59.382 "data_size": 65536 00:20:59.382 }, 00:20:59.382 { 00:20:59.382 "name": "BaseBdev4", 00:20:59.382 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:20:59.382 "is_configured": true, 00:20:59.382 "data_offset": 0, 00:20:59.382 "data_size": 65536 00:20:59.382 } 00:20:59.382 ] 00:20:59.382 }' 00:20:59.382 14:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.382 14:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.382 14:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:20:59.382 14:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.382 14:53:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.321 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.321 "name": "raid_bdev1", 00:21:00.321 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:21:00.321 "strip_size_kb": 0, 00:21:00.321 "state": "online", 00:21:00.321 "raid_level": "raid1", 00:21:00.321 "superblock": false, 00:21:00.321 "num_base_bdevs": 4, 00:21:00.321 "num_base_bdevs_discovered": 3, 00:21:00.321 "num_base_bdevs_operational": 3, 00:21:00.321 "process": { 00:21:00.321 "type": "rebuild", 00:21:00.321 "target": "spare", 00:21:00.321 "progress": { 00:21:00.321 
"blocks": 51200, 00:21:00.321 "percent": 78 00:21:00.321 } 00:21:00.321 }, 00:21:00.321 "base_bdevs_list": [ 00:21:00.321 { 00:21:00.321 "name": "spare", 00:21:00.321 "uuid": "0ab6996e-92c2-5f01-a298-10e29982acfe", 00:21:00.321 "is_configured": true, 00:21:00.321 "data_offset": 0, 00:21:00.321 "data_size": 65536 00:21:00.321 }, 00:21:00.321 { 00:21:00.321 "name": null, 00:21:00.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.321 "is_configured": false, 00:21:00.321 "data_offset": 0, 00:21:00.321 "data_size": 65536 00:21:00.321 }, 00:21:00.321 { 00:21:00.321 "name": "BaseBdev3", 00:21:00.321 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:21:00.321 "is_configured": true, 00:21:00.321 "data_offset": 0, 00:21:00.321 "data_size": 65536 00:21:00.321 }, 00:21:00.321 { 00:21:00.321 "name": "BaseBdev4", 00:21:00.321 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:21:00.321 "is_configured": true, 00:21:00.321 "data_offset": 0, 00:21:00.321 "data_size": 65536 00:21:00.321 } 00:21:00.321 ] 00:21:00.321 }' 00:21:00.322 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.580 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.580 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.580 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.580 14:53:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:01.146 [2024-11-04 14:53:30.815165] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:01.146 [2024-11-04 14:53:30.815921] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:01.146 [2024-11-04 14:53:30.816002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.712 "name": "raid_bdev1", 00:21:01.712 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:21:01.712 "strip_size_kb": 0, 00:21:01.712 "state": "online", 00:21:01.712 "raid_level": "raid1", 00:21:01.712 "superblock": false, 00:21:01.712 "num_base_bdevs": 4, 00:21:01.712 "num_base_bdevs_discovered": 3, 00:21:01.712 "num_base_bdevs_operational": 3, 00:21:01.712 "base_bdevs_list": [ 00:21:01.712 { 00:21:01.712 "name": "spare", 00:21:01.712 "uuid": "0ab6996e-92c2-5f01-a298-10e29982acfe", 00:21:01.712 "is_configured": true, 00:21:01.712 "data_offset": 0, 00:21:01.712 "data_size": 65536 00:21:01.712 }, 00:21:01.712 { 00:21:01.712 "name": null, 00:21:01.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.712 "is_configured": false, 00:21:01.712 
"data_offset": 0, 00:21:01.712 "data_size": 65536 00:21:01.712 }, 00:21:01.712 { 00:21:01.712 "name": "BaseBdev3", 00:21:01.712 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:21:01.712 "is_configured": true, 00:21:01.712 "data_offset": 0, 00:21:01.712 "data_size": 65536 00:21:01.712 }, 00:21:01.712 { 00:21:01.712 "name": "BaseBdev4", 00:21:01.712 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:21:01.712 "is_configured": true, 00:21:01.712 "data_offset": 0, 00:21:01.712 "data_size": 65536 00:21:01.712 } 00:21:01.712 ] 00:21:01.712 }' 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.712 14:53:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.712 "name": "raid_bdev1", 00:21:01.712 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:21:01.712 "strip_size_kb": 0, 00:21:01.712 "state": "online", 00:21:01.712 "raid_level": "raid1", 00:21:01.712 "superblock": false, 00:21:01.712 "num_base_bdevs": 4, 00:21:01.712 "num_base_bdevs_discovered": 3, 00:21:01.712 "num_base_bdevs_operational": 3, 00:21:01.712 "base_bdevs_list": [ 00:21:01.712 { 00:21:01.712 "name": "spare", 00:21:01.712 "uuid": "0ab6996e-92c2-5f01-a298-10e29982acfe", 00:21:01.712 "is_configured": true, 00:21:01.712 "data_offset": 0, 00:21:01.712 "data_size": 65536 00:21:01.712 }, 00:21:01.712 { 00:21:01.712 "name": null, 00:21:01.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.712 "is_configured": false, 00:21:01.712 "data_offset": 0, 00:21:01.712 "data_size": 65536 00:21:01.712 }, 00:21:01.712 { 00:21:01.712 "name": "BaseBdev3", 00:21:01.712 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:21:01.712 "is_configured": true, 00:21:01.712 "data_offset": 0, 00:21:01.712 "data_size": 65536 00:21:01.712 }, 00:21:01.712 { 00:21:01.712 "name": "BaseBdev4", 00:21:01.712 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:21:01.712 "is_configured": true, 00:21:01.712 "data_offset": 0, 00:21:01.712 "data_size": 65536 00:21:01.712 } 00:21:01.712 ] 00:21:01.712 }' 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:01.712 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.971 "name": "raid_bdev1", 00:21:01.971 "uuid": "7e9324e1-152e-4f21-9cdb-d8b1c3f4256f", 00:21:01.971 "strip_size_kb": 0, 00:21:01.971 "state": "online", 00:21:01.971 "raid_level": "raid1", 00:21:01.971 "superblock": false, 00:21:01.971 "num_base_bdevs": 4, 00:21:01.971 
"num_base_bdevs_discovered": 3, 00:21:01.971 "num_base_bdevs_operational": 3, 00:21:01.971 "base_bdevs_list": [ 00:21:01.971 { 00:21:01.971 "name": "spare", 00:21:01.971 "uuid": "0ab6996e-92c2-5f01-a298-10e29982acfe", 00:21:01.971 "is_configured": true, 00:21:01.971 "data_offset": 0, 00:21:01.971 "data_size": 65536 00:21:01.971 }, 00:21:01.971 { 00:21:01.971 "name": null, 00:21:01.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.971 "is_configured": false, 00:21:01.971 "data_offset": 0, 00:21:01.971 "data_size": 65536 00:21:01.971 }, 00:21:01.971 { 00:21:01.971 "name": "BaseBdev3", 00:21:01.971 "uuid": "3e205485-5a3f-5818-86ca-70411a05f4f5", 00:21:01.971 "is_configured": true, 00:21:01.971 "data_offset": 0, 00:21:01.971 "data_size": 65536 00:21:01.971 }, 00:21:01.971 { 00:21:01.971 "name": "BaseBdev4", 00:21:01.971 "uuid": "69ee20b2-1380-542c-b1d0-4c0fc8268801", 00:21:01.971 "is_configured": true, 00:21:01.971 "data_offset": 0, 00:21:01.971 "data_size": 65536 00:21:01.971 } 00:21:01.971 ] 00:21:01.971 }' 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.971 14:53:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.536 [2024-11-04 14:53:32.143009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.536 [2024-11-04 14:53:32.143067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.536 [2024-11-04 14:53:32.143167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.536 [2024-11-04 14:53:32.143296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:21:02.536 [2024-11-04 14:53:32.143316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.536 14:53:32 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:02.536 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:02.794 /dev/nbd0 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.794 1+0 records in 00:21:02.794 1+0 records out 00:21:02.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368747 s, 11.1 MB/s 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:02.794 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:03.051 /dev/nbd1 00:21:03.051 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:03.051 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:03.051 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:03.051 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:03.052 1+0 records in 00:21:03.052 1+0 records out 00:21:03.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436949 s, 9.4 MB/s 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:03.052 14:53:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:03.309 14:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:03.309 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:03.309 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:03.309 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:03.309 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:03.309 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.309 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:03.566 14:53:33 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.566 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77938 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77938 ']' 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77938 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # 
uname 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77938 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:04.132 killing process with pid 77938 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77938' 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77938 00:21:04.132 Received shutdown signal, test time was about 60.000000 seconds 00:21:04.132 00:21:04.132 Latency(us) 00:21:04.132 [2024-11-04T14:53:34.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.132 [2024-11-04T14:53:34.024Z] =================================================================================================================== 00:21:04.132 [2024-11-04T14:53:34.024Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:04.132 [2024-11-04 14:53:33.820701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:04.132 14:53:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77938 00:21:04.390 [2024-11-04 14:53:34.249969] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:05.762 14:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:21:05.762 00:21:05.762 real 0m20.665s 00:21:05.762 user 0m23.154s 00:21:05.762 sys 0m3.744s 00:21:05.762 14:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:05.762 14:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.762 ************************************ 00:21:05.762 END TEST raid_rebuild_test 
00:21:05.762 ************************************ 00:21:05.762 14:53:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:21:05.762 14:53:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:05.762 14:53:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:05.763 14:53:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:05.763 ************************************ 00:21:05.763 START TEST raid_rebuild_test_sb 00:21:05.763 ************************************ 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78418 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78418 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78418 ']' 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:05.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:05.763 14:53:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.763 [2024-11-04 14:53:35.458810] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:21:05.763 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:05.763 Zero copy mechanism will not be used. 
00:21:05.763 [2024-11-04 14:53:35.459046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78418 ] 00:21:05.763 [2024-11-04 14:53:35.648625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.021 [2024-11-04 14:53:35.777936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.280 [2024-11-04 14:53:35.982912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.280 [2024-11-04 14:53:35.982968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 BaseBdev1_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 [2024-11-04 14:53:36.479849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:21:06.846 [2024-11-04 14:53:36.479941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.846 [2024-11-04 14:53:36.479973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:06.846 [2024-11-04 14:53:36.479993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.846 [2024-11-04 14:53:36.482747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.846 [2024-11-04 14:53:36.482796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:06.846 BaseBdev1 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 BaseBdev2_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 [2024-11-04 14:53:36.527829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:06.846 [2024-11-04 14:53:36.527907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.846 [2024-11-04 14:53:36.527933] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:06.846 [2024-11-04 14:53:36.527953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.846 [2024-11-04 14:53:36.530696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.846 [2024-11-04 14:53:36.530748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:06.846 BaseBdev2 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 BaseBdev3_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 [2024-11-04 14:53:36.593239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:06.846 [2024-11-04 14:53:36.593314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.846 [2024-11-04 14:53:36.593345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:06.846 [2024-11-04 14:53:36.593364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:06.846 [2024-11-04 14:53:36.596048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.846 [2024-11-04 14:53:36.596097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:06.846 BaseBdev3 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 BaseBdev4_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 [2024-11-04 14:53:36.645501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:06.846 [2024-11-04 14:53:36.645570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.846 [2024-11-04 14:53:36.645609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:06.846 [2024-11-04 14:53:36.645631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.846 [2024-11-04 14:53:36.648288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.846 [2024-11-04 14:53:36.648343] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:06.846 BaseBdev4 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 spare_malloc 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 spare_delay 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 [2024-11-04 14:53:36.705630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:06.847 [2024-11-04 14:53:36.705707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.847 [2024-11-04 14:53:36.705738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:06.847 [2024-11-04 14:53:36.705758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:06.847 [2024-11-04 14:53:36.708498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.847 [2024-11-04 14:53:36.708545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:06.847 spare 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.847 [2024-11-04 14:53:36.713688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.847 [2024-11-04 14:53:36.716102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.847 [2024-11-04 14:53:36.716202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:06.847 [2024-11-04 14:53:36.716325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:06.847 [2024-11-04 14:53:36.716568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:06.847 [2024-11-04 14:53:36.716605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:06.847 [2024-11-04 14:53:36.716914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:06.847 [2024-11-04 14:53:36.717150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:06.847 [2024-11-04 14:53:36.717175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:06.847 [2024-11-04 14:53:36.717384] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.847 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.132 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.132 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.132 "name": "raid_bdev1", 00:21:07.132 "uuid": 
"ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:07.132 "strip_size_kb": 0, 00:21:07.132 "state": "online", 00:21:07.132 "raid_level": "raid1", 00:21:07.132 "superblock": true, 00:21:07.132 "num_base_bdevs": 4, 00:21:07.132 "num_base_bdevs_discovered": 4, 00:21:07.132 "num_base_bdevs_operational": 4, 00:21:07.132 "base_bdevs_list": [ 00:21:07.132 { 00:21:07.132 "name": "BaseBdev1", 00:21:07.132 "uuid": "50f4cfec-d374-5893-b0e9-b3d1fb912874", 00:21:07.132 "is_configured": true, 00:21:07.132 "data_offset": 2048, 00:21:07.132 "data_size": 63488 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "name": "BaseBdev2", 00:21:07.132 "uuid": "f5c055aa-3366-5cdf-bced-dcf1b27d7cd0", 00:21:07.132 "is_configured": true, 00:21:07.132 "data_offset": 2048, 00:21:07.132 "data_size": 63488 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "name": "BaseBdev3", 00:21:07.132 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:07.132 "is_configured": true, 00:21:07.132 "data_offset": 2048, 00:21:07.132 "data_size": 63488 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "name": "BaseBdev4", 00:21:07.132 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:07.132 "is_configured": true, 00:21:07.132 "data_offset": 2048, 00:21:07.132 "data_size": 63488 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }' 00:21:07.132 14:53:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.132 14:53:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.390 [2024-11-04 14:53:37.210291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.390 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:07.648 14:53:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:07.648 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:07.648 [2024-11-04 14:53:37.533972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:07.907 /dev/nbd0 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.907 1+0 records in 00:21:07.907 1+0 records out 00:21:07.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240409 s, 17.0 MB/s 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:07.907 14:53:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:16.012 63488+0 records in 00:21:16.012 63488+0 records out 00:21:16.012 32505856 bytes (33 MB, 31 MiB) copied, 8.2275 s, 4.0 MB/s 00:21:16.012 14:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:16.012 14:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:16.012 14:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:16.012 14:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:16.012 14:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:16.012 14:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.012 14:53:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:21:16.270 [2024-11-04 14:53:46.108970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.270 [2024-11-04 14:53:46.137033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.270 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.528 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.528 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.528 "name": "raid_bdev1", 00:21:16.528 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:16.528 "strip_size_kb": 0, 00:21:16.528 "state": "online", 00:21:16.528 "raid_level": "raid1", 00:21:16.528 "superblock": true, 00:21:16.528 "num_base_bdevs": 4, 00:21:16.528 "num_base_bdevs_discovered": 3, 00:21:16.528 "num_base_bdevs_operational": 3, 00:21:16.528 "base_bdevs_list": [ 00:21:16.528 { 00:21:16.528 "name": null, 00:21:16.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.528 "is_configured": false, 00:21:16.528 "data_offset": 0, 00:21:16.528 "data_size": 63488 00:21:16.528 }, 00:21:16.528 { 00:21:16.528 "name": "BaseBdev2", 00:21:16.528 "uuid": "f5c055aa-3366-5cdf-bced-dcf1b27d7cd0", 00:21:16.528 "is_configured": true, 00:21:16.528 
"data_offset": 2048, 00:21:16.528 "data_size": 63488 00:21:16.528 }, 00:21:16.528 { 00:21:16.528 "name": "BaseBdev3", 00:21:16.528 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:16.528 "is_configured": true, 00:21:16.528 "data_offset": 2048, 00:21:16.528 "data_size": 63488 00:21:16.528 }, 00:21:16.528 { 00:21:16.528 "name": "BaseBdev4", 00:21:16.529 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:16.529 "is_configured": true, 00:21:16.529 "data_offset": 2048, 00:21:16.529 "data_size": 63488 00:21:16.529 } 00:21:16.529 ] 00:21:16.529 }' 00:21:16.529 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.529 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.787 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:16.787 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.787 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.787 [2024-11-04 14:53:46.641195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:16.787 [2024-11-04 14:53:46.655445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:21:16.787 14:53:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.787 14:53:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:16.787 [2024-11-04 14:53:46.657909] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:18.160 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.160 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.160 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:21:18.160 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.161 "name": "raid_bdev1", 00:21:18.161 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:18.161 "strip_size_kb": 0, 00:21:18.161 "state": "online", 00:21:18.161 "raid_level": "raid1", 00:21:18.161 "superblock": true, 00:21:18.161 "num_base_bdevs": 4, 00:21:18.161 "num_base_bdevs_discovered": 4, 00:21:18.161 "num_base_bdevs_operational": 4, 00:21:18.161 "process": { 00:21:18.161 "type": "rebuild", 00:21:18.161 "target": "spare", 00:21:18.161 "progress": { 00:21:18.161 "blocks": 20480, 00:21:18.161 "percent": 32 00:21:18.161 } 00:21:18.161 }, 00:21:18.161 "base_bdevs_list": [ 00:21:18.161 { 00:21:18.161 "name": "spare", 00:21:18.161 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:18.161 "is_configured": true, 00:21:18.161 "data_offset": 2048, 00:21:18.161 "data_size": 63488 00:21:18.161 }, 00:21:18.161 { 00:21:18.161 "name": "BaseBdev2", 00:21:18.161 "uuid": "f5c055aa-3366-5cdf-bced-dcf1b27d7cd0", 00:21:18.161 "is_configured": true, 00:21:18.161 "data_offset": 2048, 00:21:18.161 "data_size": 63488 00:21:18.161 }, 00:21:18.161 { 00:21:18.161 "name": "BaseBdev3", 00:21:18.161 "uuid": 
"f60ce697-af96-5614-8afe-642691ecb04a", 00:21:18.161 "is_configured": true, 00:21:18.161 "data_offset": 2048, 00:21:18.161 "data_size": 63488 00:21:18.161 }, 00:21:18.161 { 00:21:18.161 "name": "BaseBdev4", 00:21:18.161 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:18.161 "is_configured": true, 00:21:18.161 "data_offset": 2048, 00:21:18.161 "data_size": 63488 00:21:18.161 } 00:21:18.161 ] 00:21:18.161 }' 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.161 [2024-11-04 14:53:47.827159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:18.161 [2024-11-04 14:53:47.866323] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:18.161 [2024-11-04 14:53:47.866407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.161 [2024-11-04 14:53:47.866438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:18.161 [2024-11-04 14:53:47.866463] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.161 "name": "raid_bdev1", 00:21:18.161 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:18.161 "strip_size_kb": 0, 00:21:18.161 "state": "online", 00:21:18.161 "raid_level": "raid1", 00:21:18.161 "superblock": true, 00:21:18.161 "num_base_bdevs": 4, 00:21:18.161 
"num_base_bdevs_discovered": 3, 00:21:18.161 "num_base_bdevs_operational": 3, 00:21:18.161 "base_bdevs_list": [ 00:21:18.161 { 00:21:18.161 "name": null, 00:21:18.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.161 "is_configured": false, 00:21:18.161 "data_offset": 0, 00:21:18.161 "data_size": 63488 00:21:18.161 }, 00:21:18.161 { 00:21:18.161 "name": "BaseBdev2", 00:21:18.161 "uuid": "f5c055aa-3366-5cdf-bced-dcf1b27d7cd0", 00:21:18.161 "is_configured": true, 00:21:18.161 "data_offset": 2048, 00:21:18.161 "data_size": 63488 00:21:18.161 }, 00:21:18.161 { 00:21:18.161 "name": "BaseBdev3", 00:21:18.161 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:18.161 "is_configured": true, 00:21:18.161 "data_offset": 2048, 00:21:18.161 "data_size": 63488 00:21:18.161 }, 00:21:18.161 { 00:21:18.161 "name": "BaseBdev4", 00:21:18.161 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:18.161 "is_configured": true, 00:21:18.161 "data_offset": 2048, 00:21:18.161 "data_size": 63488 00:21:18.161 } 00:21:18.161 ] 00:21:18.161 }' 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.161 14:53:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.736 "name": "raid_bdev1", 00:21:18.736 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:18.736 "strip_size_kb": 0, 00:21:18.736 "state": "online", 00:21:18.736 "raid_level": "raid1", 00:21:18.736 "superblock": true, 00:21:18.736 "num_base_bdevs": 4, 00:21:18.736 "num_base_bdevs_discovered": 3, 00:21:18.736 "num_base_bdevs_operational": 3, 00:21:18.736 "base_bdevs_list": [ 00:21:18.736 { 00:21:18.736 "name": null, 00:21:18.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.736 "is_configured": false, 00:21:18.736 "data_offset": 0, 00:21:18.736 "data_size": 63488 00:21:18.736 }, 00:21:18.736 { 00:21:18.736 "name": "BaseBdev2", 00:21:18.736 "uuid": "f5c055aa-3366-5cdf-bced-dcf1b27d7cd0", 00:21:18.736 "is_configured": true, 00:21:18.736 "data_offset": 2048, 00:21:18.736 "data_size": 63488 00:21:18.736 }, 00:21:18.736 { 00:21:18.736 "name": "BaseBdev3", 00:21:18.736 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:18.736 "is_configured": true, 00:21:18.736 "data_offset": 2048, 00:21:18.736 "data_size": 63488 00:21:18.736 }, 00:21:18.736 { 00:21:18.736 "name": "BaseBdev4", 00:21:18.736 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:18.736 "is_configured": true, 00:21:18.736 "data_offset": 2048, 00:21:18.736 "data_size": 63488 00:21:18.736 } 00:21:18.736 ] 00:21:18.736 }' 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.736 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:18.737 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:18.737 14:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.737 14:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.737 [2024-11-04 14:53:48.530123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.737 [2024-11-04 14:53:48.543437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:21:18.737 14:53:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.737 14:53:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:18.737 [2024-11-04 14:53:48.545926] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:19.682 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.682 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.682 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.682 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.682 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.682 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.682 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.682 14:53:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.682 14:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.940 14:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.940 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.940 "name": "raid_bdev1", 00:21:19.940 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:19.940 "strip_size_kb": 0, 00:21:19.940 "state": "online", 00:21:19.940 "raid_level": "raid1", 00:21:19.940 "superblock": true, 00:21:19.940 "num_base_bdevs": 4, 00:21:19.940 "num_base_bdevs_discovered": 4, 00:21:19.940 "num_base_bdevs_operational": 4, 00:21:19.940 "process": { 00:21:19.940 "type": "rebuild", 00:21:19.940 "target": "spare", 00:21:19.940 "progress": { 00:21:19.940 "blocks": 20480, 00:21:19.940 "percent": 32 00:21:19.940 } 00:21:19.940 }, 00:21:19.940 "base_bdevs_list": [ 00:21:19.940 { 00:21:19.940 "name": "spare", 00:21:19.940 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:19.940 "is_configured": true, 00:21:19.940 "data_offset": 2048, 00:21:19.940 "data_size": 63488 00:21:19.940 }, 00:21:19.940 { 00:21:19.940 "name": "BaseBdev2", 00:21:19.940 "uuid": "f5c055aa-3366-5cdf-bced-dcf1b27d7cd0", 00:21:19.940 "is_configured": true, 00:21:19.940 "data_offset": 2048, 00:21:19.940 "data_size": 63488 00:21:19.940 }, 00:21:19.940 { 00:21:19.940 "name": "BaseBdev3", 00:21:19.940 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:19.940 "is_configured": true, 00:21:19.940 "data_offset": 2048, 00:21:19.940 "data_size": 63488 00:21:19.940 }, 00:21:19.941 { 00:21:19.941 "name": "BaseBdev4", 00:21:19.941 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:19.941 "is_configured": true, 00:21:19.941 "data_offset": 2048, 00:21:19.941 "data_size": 63488 00:21:19.941 } 00:21:19.941 ] 00:21:19.941 }' 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:19.941 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.941 14:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.941 [2024-11-04 14:53:49.727673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:20.199 [2024-11-04 14:53:49.855399] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.199 "name": "raid_bdev1", 00:21:20.199 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:20.199 "strip_size_kb": 0, 00:21:20.199 "state": "online", 00:21:20.199 "raid_level": "raid1", 00:21:20.199 "superblock": true, 00:21:20.199 "num_base_bdevs": 4, 00:21:20.199 "num_base_bdevs_discovered": 3, 00:21:20.199 "num_base_bdevs_operational": 3, 00:21:20.199 "process": { 00:21:20.199 "type": "rebuild", 00:21:20.199 "target": "spare", 00:21:20.199 "progress": { 00:21:20.199 "blocks": 24576, 00:21:20.199 "percent": 38 00:21:20.199 } 00:21:20.199 }, 00:21:20.199 "base_bdevs_list": [ 00:21:20.199 { 00:21:20.199 "name": "spare", 00:21:20.199 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:20.199 "is_configured": true, 00:21:20.199 "data_offset": 2048, 00:21:20.199 "data_size": 63488 00:21:20.199 }, 00:21:20.199 { 00:21:20.199 "name": null, 
00:21:20.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.199 "is_configured": false, 00:21:20.199 "data_offset": 0, 00:21:20.199 "data_size": 63488 00:21:20.199 }, 00:21:20.199 { 00:21:20.199 "name": "BaseBdev3", 00:21:20.199 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:20.199 "is_configured": true, 00:21:20.199 "data_offset": 2048, 00:21:20.199 "data_size": 63488 00:21:20.199 }, 00:21:20.199 { 00:21:20.199 "name": "BaseBdev4", 00:21:20.199 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:20.199 "is_configured": true, 00:21:20.199 "data_offset": 2048, 00:21:20.199 "data_size": 63488 00:21:20.199 } 00:21:20.199 ] 00:21:20.199 }' 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.199 14:53:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=512 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.199 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.199 "name": "raid_bdev1", 00:21:20.199 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:20.199 "strip_size_kb": 0, 00:21:20.199 "state": "online", 00:21:20.199 "raid_level": "raid1", 00:21:20.199 "superblock": true, 00:21:20.199 "num_base_bdevs": 4, 00:21:20.199 "num_base_bdevs_discovered": 3, 00:21:20.199 "num_base_bdevs_operational": 3, 00:21:20.199 "process": { 00:21:20.199 "type": "rebuild", 00:21:20.199 "target": "spare", 00:21:20.199 "progress": { 00:21:20.199 "blocks": 26624, 00:21:20.199 "percent": 41 00:21:20.199 } 00:21:20.199 }, 00:21:20.199 "base_bdevs_list": [ 00:21:20.199 { 00:21:20.199 "name": "spare", 00:21:20.199 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:20.199 "is_configured": true, 00:21:20.199 "data_offset": 2048, 00:21:20.199 "data_size": 63488 00:21:20.199 }, 00:21:20.199 { 00:21:20.199 "name": null, 00:21:20.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.199 "is_configured": false, 00:21:20.199 "data_offset": 0, 00:21:20.199 "data_size": 63488 00:21:20.199 }, 00:21:20.199 { 00:21:20.199 "name": "BaseBdev3", 00:21:20.199 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:20.199 "is_configured": true, 00:21:20.199 "data_offset": 2048, 00:21:20.199 "data_size": 63488 00:21:20.199 }, 00:21:20.199 { 00:21:20.199 "name": "BaseBdev4", 00:21:20.199 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:20.199 "is_configured": true, 00:21:20.199 "data_offset": 
2048, 00:21:20.199 "data_size": 63488 00:21:20.199 } 00:21:20.200 ] 00:21:20.200 }' 00:21:20.200 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.461 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.461 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.461 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.461 14:53:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.395 "name": "raid_bdev1", 00:21:21.395 
"uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:21.395 "strip_size_kb": 0, 00:21:21.395 "state": "online", 00:21:21.395 "raid_level": "raid1", 00:21:21.395 "superblock": true, 00:21:21.395 "num_base_bdevs": 4, 00:21:21.395 "num_base_bdevs_discovered": 3, 00:21:21.395 "num_base_bdevs_operational": 3, 00:21:21.395 "process": { 00:21:21.395 "type": "rebuild", 00:21:21.395 "target": "spare", 00:21:21.395 "progress": { 00:21:21.395 "blocks": 51200, 00:21:21.395 "percent": 80 00:21:21.395 } 00:21:21.395 }, 00:21:21.395 "base_bdevs_list": [ 00:21:21.395 { 00:21:21.395 "name": "spare", 00:21:21.395 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:21.395 "is_configured": true, 00:21:21.395 "data_offset": 2048, 00:21:21.395 "data_size": 63488 00:21:21.395 }, 00:21:21.395 { 00:21:21.395 "name": null, 00:21:21.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.395 "is_configured": false, 00:21:21.395 "data_offset": 0, 00:21:21.395 "data_size": 63488 00:21:21.395 }, 00:21:21.395 { 00:21:21.395 "name": "BaseBdev3", 00:21:21.395 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:21.395 "is_configured": true, 00:21:21.395 "data_offset": 2048, 00:21:21.395 "data_size": 63488 00:21:21.395 }, 00:21:21.395 { 00:21:21.395 "name": "BaseBdev4", 00:21:21.395 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:21.395 "is_configured": true, 00:21:21.395 "data_offset": 2048, 00:21:21.395 "data_size": 63488 00:21:21.395 } 00:21:21.395 ] 00:21:21.395 }' 00:21:21.395 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.653 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.653 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.653 14:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.653 14:53:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:21.911 [2024-11-04 14:53:51.769630] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:21.911 [2024-11-04 14:53:51.769742] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:21.911 [2024-11-04 14:53:51.769893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.477 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.735 "name": "raid_bdev1", 00:21:22.735 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:22.735 "strip_size_kb": 0, 00:21:22.735 "state": "online", 00:21:22.735 "raid_level": "raid1", 00:21:22.735 "superblock": true, 00:21:22.735 "num_base_bdevs": 
4, 00:21:22.735 "num_base_bdevs_discovered": 3, 00:21:22.735 "num_base_bdevs_operational": 3, 00:21:22.735 "base_bdevs_list": [ 00:21:22.735 { 00:21:22.735 "name": "spare", 00:21:22.735 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:22.735 "is_configured": true, 00:21:22.735 "data_offset": 2048, 00:21:22.735 "data_size": 63488 00:21:22.735 }, 00:21:22.735 { 00:21:22.735 "name": null, 00:21:22.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.735 "is_configured": false, 00:21:22.735 "data_offset": 0, 00:21:22.735 "data_size": 63488 00:21:22.735 }, 00:21:22.735 { 00:21:22.735 "name": "BaseBdev3", 00:21:22.735 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:22.735 "is_configured": true, 00:21:22.735 "data_offset": 2048, 00:21:22.735 "data_size": 63488 00:21:22.735 }, 00:21:22.735 { 00:21:22.735 "name": "BaseBdev4", 00:21:22.735 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:22.735 "is_configured": true, 00:21:22.735 "data_offset": 2048, 00:21:22.735 "data_size": 63488 00:21:22.735 } 00:21:22.735 ] 00:21:22.735 }' 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:22.735 14:53:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.735 "name": "raid_bdev1", 00:21:22.735 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:22.735 "strip_size_kb": 0, 00:21:22.735 "state": "online", 00:21:22.735 "raid_level": "raid1", 00:21:22.735 "superblock": true, 00:21:22.735 "num_base_bdevs": 4, 00:21:22.735 "num_base_bdevs_discovered": 3, 00:21:22.735 "num_base_bdevs_operational": 3, 00:21:22.735 "base_bdevs_list": [ 00:21:22.735 { 00:21:22.735 "name": "spare", 00:21:22.735 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:22.735 "is_configured": true, 00:21:22.735 "data_offset": 2048, 00:21:22.735 "data_size": 63488 00:21:22.735 }, 00:21:22.735 { 00:21:22.735 "name": null, 00:21:22.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.735 "is_configured": false, 00:21:22.735 "data_offset": 0, 00:21:22.735 "data_size": 63488 00:21:22.735 }, 00:21:22.735 { 00:21:22.735 "name": "BaseBdev3", 00:21:22.735 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:22.735 "is_configured": true, 00:21:22.735 "data_offset": 2048, 00:21:22.735 "data_size": 63488 00:21:22.735 }, 00:21:22.735 { 00:21:22.735 "name": "BaseBdev4", 00:21:22.735 "uuid": 
"302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:22.735 "is_configured": true, 00:21:22.735 "data_offset": 2048, 00:21:22.735 "data_size": 63488 00:21:22.735 } 00:21:22.735 ] 00:21:22.735 }' 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:22.735 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.993 14:53:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.993 "name": "raid_bdev1", 00:21:22.993 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:22.993 "strip_size_kb": 0, 00:21:22.993 "state": "online", 00:21:22.993 "raid_level": "raid1", 00:21:22.993 "superblock": true, 00:21:22.993 "num_base_bdevs": 4, 00:21:22.993 "num_base_bdevs_discovered": 3, 00:21:22.993 "num_base_bdevs_operational": 3, 00:21:22.993 "base_bdevs_list": [ 00:21:22.993 { 00:21:22.993 "name": "spare", 00:21:22.993 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:22.993 "is_configured": true, 00:21:22.993 "data_offset": 2048, 00:21:22.993 "data_size": 63488 00:21:22.993 }, 00:21:22.993 { 00:21:22.993 "name": null, 00:21:22.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.993 "is_configured": false, 00:21:22.993 "data_offset": 0, 00:21:22.993 "data_size": 63488 00:21:22.993 }, 00:21:22.993 { 00:21:22.993 "name": "BaseBdev3", 00:21:22.993 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:22.993 "is_configured": true, 00:21:22.993 "data_offset": 2048, 00:21:22.993 "data_size": 63488 00:21:22.993 }, 00:21:22.993 { 00:21:22.993 "name": "BaseBdev4", 00:21:22.993 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:22.993 "is_configured": true, 00:21:22.993 "data_offset": 2048, 00:21:22.993 "data_size": 63488 00:21:22.993 } 00:21:22.993 ] 00:21:22.993 }' 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.993 14:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.610 [2024-11-04 14:53:53.163400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.610 [2024-11-04 14:53:53.163450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:23.610 [2024-11-04 14:53:53.163561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.610 [2024-11-04 14:53:53.163677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.610 [2024-11-04 14:53:53.163702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:23.610 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:23.868 /dev/nbd0 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:23.868 
14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.868 1+0 records in 00:21:23.868 1+0 records out 00:21:23.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311151 s, 13.2 MB/s 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:23.868 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:24.126 /dev/nbd1 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:24.126 1+0 records in 00:21:24.126 1+0 records out 00:21:24.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424046 s, 9.7 MB/s 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:24.126 14:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:24.383 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:24.383 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:24.383 14:53:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:24.383 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:24.383 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:24.383 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.383 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.641 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:24.898 14:53:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.898 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.898 [2024-11-04 14:53:54.707104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:24.898 [2024-11-04 14:53:54.707167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.898 [2024-11-04 14:53:54.707199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:24.898 [2024-11-04 14:53:54.707214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.898 [2024-11-04 14:53:54.710422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.898 [2024-11-04 14:53:54.710466] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:24.899 [2024-11-04 14:53:54.710581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:24.899 [2024-11-04 14:53:54.710681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.899 [2024-11-04 14:53:54.710886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:24.899 [2024-11-04 14:53:54.711050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:24.899 spare 00:21:24.899 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.899 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:24.899 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.899 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 [2024-11-04 14:53:54.811231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:25.157 [2024-11-04 14:53:54.811271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:25.157 [2024-11-04 14:53:54.811701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:21:25.157 [2024-11-04 14:53:54.812006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:25.157 [2024-11-04 14:53:54.812035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:25.157 [2024-11-04 14:53:54.812234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.157 "name": "raid_bdev1", 00:21:25.157 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:25.157 "strip_size_kb": 0, 00:21:25.157 "state": "online", 00:21:25.157 "raid_level": "raid1", 00:21:25.157 "superblock": true, 00:21:25.157 "num_base_bdevs": 4, 00:21:25.157 "num_base_bdevs_discovered": 3, 00:21:25.157 "num_base_bdevs_operational": 
3, 00:21:25.157 "base_bdevs_list": [ 00:21:25.157 { 00:21:25.157 "name": "spare", 00:21:25.157 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:25.157 "is_configured": true, 00:21:25.157 "data_offset": 2048, 00:21:25.157 "data_size": 63488 00:21:25.157 }, 00:21:25.157 { 00:21:25.157 "name": null, 00:21:25.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.157 "is_configured": false, 00:21:25.157 "data_offset": 2048, 00:21:25.157 "data_size": 63488 00:21:25.157 }, 00:21:25.157 { 00:21:25.157 "name": "BaseBdev3", 00:21:25.157 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:25.157 "is_configured": true, 00:21:25.157 "data_offset": 2048, 00:21:25.157 "data_size": 63488 00:21:25.157 }, 00:21:25.157 { 00:21:25.157 "name": "BaseBdev4", 00:21:25.157 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:25.157 "is_configured": true, 00:21:25.157 "data_offset": 2048, 00:21:25.157 "data_size": 63488 00:21:25.157 } 00:21:25.157 ] 00:21:25.157 }' 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.157 14:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.723 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.723 "name": "raid_bdev1", 00:21:25.723 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:25.723 "strip_size_kb": 0, 00:21:25.723 "state": "online", 00:21:25.723 "raid_level": "raid1", 00:21:25.723 "superblock": true, 00:21:25.723 "num_base_bdevs": 4, 00:21:25.723 "num_base_bdevs_discovered": 3, 00:21:25.723 "num_base_bdevs_operational": 3, 00:21:25.723 "base_bdevs_list": [ 00:21:25.723 { 00:21:25.723 "name": "spare", 00:21:25.723 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:25.723 "is_configured": true, 00:21:25.723 "data_offset": 2048, 00:21:25.723 "data_size": 63488 00:21:25.724 }, 00:21:25.724 { 00:21:25.724 "name": null, 00:21:25.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.724 "is_configured": false, 00:21:25.724 "data_offset": 2048, 00:21:25.724 "data_size": 63488 00:21:25.724 }, 00:21:25.724 { 00:21:25.724 "name": "BaseBdev3", 00:21:25.724 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:25.724 "is_configured": true, 00:21:25.724 "data_offset": 2048, 00:21:25.724 "data_size": 63488 00:21:25.724 }, 00:21:25.724 { 00:21:25.724 "name": "BaseBdev4", 00:21:25.724 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:25.724 "is_configured": true, 00:21:25.724 "data_offset": 2048, 00:21:25.724 "data_size": 63488 00:21:25.724 } 00:21:25.724 ] 00:21:25.724 }' 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.724 [2024-11-04 14:53:55.535492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.724 "name": "raid_bdev1", 00:21:25.724 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:25.724 "strip_size_kb": 0, 00:21:25.724 "state": "online", 00:21:25.724 "raid_level": "raid1", 00:21:25.724 "superblock": true, 00:21:25.724 "num_base_bdevs": 4, 00:21:25.724 "num_base_bdevs_discovered": 2, 00:21:25.724 "num_base_bdevs_operational": 2, 00:21:25.724 "base_bdevs_list": [ 00:21:25.724 { 00:21:25.724 "name": null, 00:21:25.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.724 "is_configured": false, 00:21:25.724 "data_offset": 0, 00:21:25.724 "data_size": 63488 00:21:25.724 }, 00:21:25.724 { 00:21:25.724 "name": null, 00:21:25.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.724 "is_configured": false, 00:21:25.724 "data_offset": 2048, 00:21:25.724 "data_size": 63488 00:21:25.724 }, 00:21:25.724 { 00:21:25.724 "name": "BaseBdev3", 00:21:25.724 "uuid": 
"f60ce697-af96-5614-8afe-642691ecb04a", 00:21:25.724 "is_configured": true, 00:21:25.724 "data_offset": 2048, 00:21:25.724 "data_size": 63488 00:21:25.724 }, 00:21:25.724 { 00:21:25.724 "name": "BaseBdev4", 00:21:25.724 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:25.724 "is_configured": true, 00:21:25.724 "data_offset": 2048, 00:21:25.724 "data_size": 63488 00:21:25.724 } 00:21:25.724 ] 00:21:25.724 }' 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.724 14:53:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.290 14:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:26.291 14:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.291 14:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.291 [2024-11-04 14:53:56.063921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.291 [2024-11-04 14:53:56.064230] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:26.291 [2024-11-04 14:53:56.064252] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:26.291 [2024-11-04 14:53:56.064323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.291 [2024-11-04 14:53:56.078725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:21:26.291 14:53:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.291 14:53:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:26.291 [2024-11-04 14:53:56.081729] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.223 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.481 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.481 "name": "raid_bdev1", 00:21:27.481 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:27.481 "strip_size_kb": 0, 00:21:27.481 "state": "online", 00:21:27.481 "raid_level": "raid1", 
00:21:27.481 "superblock": true, 00:21:27.481 "num_base_bdevs": 4, 00:21:27.481 "num_base_bdevs_discovered": 3, 00:21:27.481 "num_base_bdevs_operational": 3, 00:21:27.481 "process": { 00:21:27.481 "type": "rebuild", 00:21:27.481 "target": "spare", 00:21:27.481 "progress": { 00:21:27.481 "blocks": 20480, 00:21:27.481 "percent": 32 00:21:27.481 } 00:21:27.481 }, 00:21:27.481 "base_bdevs_list": [ 00:21:27.481 { 00:21:27.481 "name": "spare", 00:21:27.481 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:27.481 "is_configured": true, 00:21:27.481 "data_offset": 2048, 00:21:27.481 "data_size": 63488 00:21:27.481 }, 00:21:27.481 { 00:21:27.482 "name": null, 00:21:27.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.482 "is_configured": false, 00:21:27.482 "data_offset": 2048, 00:21:27.482 "data_size": 63488 00:21:27.482 }, 00:21:27.482 { 00:21:27.482 "name": "BaseBdev3", 00:21:27.482 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:27.482 "is_configured": true, 00:21:27.482 "data_offset": 2048, 00:21:27.482 "data_size": 63488 00:21:27.482 }, 00:21:27.482 { 00:21:27.482 "name": "BaseBdev4", 00:21:27.482 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:27.482 "is_configured": true, 00:21:27.482 "data_offset": 2048, 00:21:27.482 "data_size": 63488 00:21:27.482 } 00:21:27.482 ] 00:21:27.482 }' 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.482 [2024-11-04 14:53:57.247106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.482 [2024-11-04 14:53:57.290778] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:27.482 [2024-11-04 14:53:57.290869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.482 [2024-11-04 14:53:57.290900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.482 [2024-11-04 14:53:57.290912] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.482 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.740 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.740 "name": "raid_bdev1", 00:21:27.740 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:27.740 "strip_size_kb": 0, 00:21:27.740 "state": "online", 00:21:27.740 "raid_level": "raid1", 00:21:27.740 "superblock": true, 00:21:27.740 "num_base_bdevs": 4, 00:21:27.740 "num_base_bdevs_discovered": 2, 00:21:27.740 "num_base_bdevs_operational": 2, 00:21:27.740 "base_bdevs_list": [ 00:21:27.740 { 00:21:27.740 "name": null, 00:21:27.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.740 "is_configured": false, 00:21:27.740 "data_offset": 0, 00:21:27.740 "data_size": 63488 00:21:27.740 }, 00:21:27.740 { 00:21:27.740 "name": null, 00:21:27.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.740 "is_configured": false, 00:21:27.740 "data_offset": 2048, 00:21:27.740 "data_size": 63488 00:21:27.740 }, 00:21:27.740 { 00:21:27.740 "name": "BaseBdev3", 00:21:27.740 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:27.740 "is_configured": true, 00:21:27.740 "data_offset": 2048, 00:21:27.740 "data_size": 63488 00:21:27.740 }, 00:21:27.740 { 00:21:27.740 "name": "BaseBdev4", 00:21:27.740 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:27.740 "is_configured": true, 00:21:27.740 "data_offset": 2048, 00:21:27.740 "data_size": 63488 00:21:27.740 } 00:21:27.740 ] 00:21:27.740 }' 00:21:27.740 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:27.740 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.999 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:27.999 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.999 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.999 [2024-11-04 14:53:57.844511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:27.999 [2024-11-04 14:53:57.844592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.999 [2024-11-04 14:53:57.844636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:27.999 [2024-11-04 14:53:57.844652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.999 [2024-11-04 14:53:57.845310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.999 [2024-11-04 14:53:57.845350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:27.999 [2024-11-04 14:53:57.845486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:27.999 [2024-11-04 14:53:57.845508] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:27.999 [2024-11-04 14:53:57.845531] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:27.999 [2024-11-04 14:53:57.845568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.999 [2024-11-04 14:53:57.859124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:21:27.999 spare 00:21:27.999 14:53:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.999 14:53:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:27.999 [2024-11-04 14:53:57.861950] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.374 "name": "raid_bdev1", 00:21:29.374 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:29.374 "strip_size_kb": 0, 00:21:29.374 "state": "online", 00:21:29.374 
"raid_level": "raid1", 00:21:29.374 "superblock": true, 00:21:29.374 "num_base_bdevs": 4, 00:21:29.374 "num_base_bdevs_discovered": 3, 00:21:29.374 "num_base_bdevs_operational": 3, 00:21:29.374 "process": { 00:21:29.374 "type": "rebuild", 00:21:29.374 "target": "spare", 00:21:29.374 "progress": { 00:21:29.374 "blocks": 20480, 00:21:29.374 "percent": 32 00:21:29.374 } 00:21:29.374 }, 00:21:29.374 "base_bdevs_list": [ 00:21:29.374 { 00:21:29.374 "name": "spare", 00:21:29.374 "uuid": "2d3d058f-af33-536b-9412-3ffe31e67839", 00:21:29.374 "is_configured": true, 00:21:29.374 "data_offset": 2048, 00:21:29.374 "data_size": 63488 00:21:29.374 }, 00:21:29.374 { 00:21:29.374 "name": null, 00:21:29.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.374 "is_configured": false, 00:21:29.374 "data_offset": 2048, 00:21:29.374 "data_size": 63488 00:21:29.374 }, 00:21:29.374 { 00:21:29.374 "name": "BaseBdev3", 00:21:29.374 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:29.374 "is_configured": true, 00:21:29.374 "data_offset": 2048, 00:21:29.374 "data_size": 63488 00:21:29.374 }, 00:21:29.374 { 00:21:29.374 "name": "BaseBdev4", 00:21:29.374 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:29.374 "is_configured": true, 00:21:29.374 "data_offset": 2048, 00:21:29.374 "data_size": 63488 00:21:29.374 } 00:21:29.374 ] 00:21:29.374 }' 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.374 14:53:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.374 [2024-11-04 14:53:59.028591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:29.374 [2024-11-04 14:53:59.071020] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:29.374 [2024-11-04 14:53:59.071118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.374 [2024-11-04 14:53:59.071143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:29.374 [2024-11-04 14:53:59.071158] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.374 
14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.374 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.374 "name": "raid_bdev1", 00:21:29.374 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:29.374 "strip_size_kb": 0, 00:21:29.374 "state": "online", 00:21:29.374 "raid_level": "raid1", 00:21:29.374 "superblock": true, 00:21:29.374 "num_base_bdevs": 4, 00:21:29.374 "num_base_bdevs_discovered": 2, 00:21:29.374 "num_base_bdevs_operational": 2, 00:21:29.374 "base_bdevs_list": [ 00:21:29.374 { 00:21:29.374 "name": null, 00:21:29.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.374 "is_configured": false, 00:21:29.375 "data_offset": 0, 00:21:29.375 "data_size": 63488 00:21:29.375 }, 00:21:29.375 { 00:21:29.375 "name": null, 00:21:29.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.375 "is_configured": false, 00:21:29.375 "data_offset": 2048, 00:21:29.375 "data_size": 63488 00:21:29.375 }, 00:21:29.375 { 00:21:29.375 "name": "BaseBdev3", 00:21:29.375 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:29.375 "is_configured": true, 00:21:29.375 "data_offset": 2048, 00:21:29.375 "data_size": 63488 00:21:29.375 }, 00:21:29.375 { 00:21:29.375 "name": "BaseBdev4", 00:21:29.375 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:29.375 "is_configured": true, 00:21:29.375 "data_offset": 2048, 00:21:29.375 "data_size": 63488 00:21:29.375 } 00:21:29.375 ] 00:21:29.375 }' 00:21:29.375 14:53:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.375 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.948 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.948 "name": "raid_bdev1", 00:21:29.948 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:29.948 "strip_size_kb": 0, 00:21:29.948 "state": "online", 00:21:29.948 "raid_level": "raid1", 00:21:29.948 "superblock": true, 00:21:29.948 "num_base_bdevs": 4, 00:21:29.948 "num_base_bdevs_discovered": 2, 00:21:29.948 "num_base_bdevs_operational": 2, 00:21:29.948 "base_bdevs_list": [ 00:21:29.948 { 00:21:29.948 "name": null, 00:21:29.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.948 "is_configured": false, 00:21:29.948 "data_offset": 0, 00:21:29.948 "data_size": 63488 00:21:29.948 }, 00:21:29.948 
{ 00:21:29.948 "name": null, 00:21:29.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.948 "is_configured": false, 00:21:29.948 "data_offset": 2048, 00:21:29.948 "data_size": 63488 00:21:29.948 }, 00:21:29.948 { 00:21:29.948 "name": "BaseBdev3", 00:21:29.948 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:29.948 "is_configured": true, 00:21:29.948 "data_offset": 2048, 00:21:29.948 "data_size": 63488 00:21:29.948 }, 00:21:29.948 { 00:21:29.948 "name": "BaseBdev4", 00:21:29.948 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:29.949 "is_configured": true, 00:21:29.949 "data_offset": 2048, 00:21:29.949 "data_size": 63488 00:21:29.949 } 00:21:29.949 ] 00:21:29.949 }' 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.949 [2024-11-04 14:53:59.763874] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:29.949 [2024-11-04 14:53:59.763981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.949 [2024-11-04 14:53:59.764010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:29.949 [2024-11-04 14:53:59.764060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.949 [2024-11-04 14:53:59.764803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.949 [2024-11-04 14:53:59.764859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:29.949 [2024-11-04 14:53:59.764969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:29.949 [2024-11-04 14:53:59.765010] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:29.949 [2024-11-04 14:53:59.765038] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:29.949 [2024-11-04 14:53:59.765067] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:29.949 BaseBdev1 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.949 14:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.885 14:54:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.885 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.143 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.143 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.143 14:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.143 14:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.143 14:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.143 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.143 "name": "raid_bdev1", 00:21:31.143 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:31.143 "strip_size_kb": 0, 00:21:31.143 "state": "online", 00:21:31.143 "raid_level": "raid1", 00:21:31.143 "superblock": true, 00:21:31.143 "num_base_bdevs": 4, 00:21:31.143 "num_base_bdevs_discovered": 2, 00:21:31.143 "num_base_bdevs_operational": 2, 00:21:31.143 "base_bdevs_list": [ 00:21:31.143 { 00:21:31.143 "name": null, 00:21:31.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.143 "is_configured": false, 00:21:31.143 "data_offset": 0, 00:21:31.143 "data_size": 63488 00:21:31.143 }, 00:21:31.143 { 00:21:31.143 "name": null, 00:21:31.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.143 
"is_configured": false, 00:21:31.143 "data_offset": 2048, 00:21:31.143 "data_size": 63488 00:21:31.143 }, 00:21:31.143 { 00:21:31.143 "name": "BaseBdev3", 00:21:31.143 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:31.143 "is_configured": true, 00:21:31.143 "data_offset": 2048, 00:21:31.143 "data_size": 63488 00:21:31.143 }, 00:21:31.143 { 00:21:31.143 "name": "BaseBdev4", 00:21:31.143 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:31.143 "is_configured": true, 00:21:31.143 "data_offset": 2048, 00:21:31.143 "data_size": 63488 00:21:31.143 } 00:21:31.143 ] 00:21:31.143 }' 00:21:31.143 14:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.143 14:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:31.709 "name": "raid_bdev1", 00:21:31.709 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:31.709 "strip_size_kb": 0, 00:21:31.709 "state": "online", 00:21:31.709 "raid_level": "raid1", 00:21:31.709 "superblock": true, 00:21:31.709 "num_base_bdevs": 4, 00:21:31.709 "num_base_bdevs_discovered": 2, 00:21:31.709 "num_base_bdevs_operational": 2, 00:21:31.709 "base_bdevs_list": [ 00:21:31.709 { 00:21:31.709 "name": null, 00:21:31.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.709 "is_configured": false, 00:21:31.709 "data_offset": 0, 00:21:31.709 "data_size": 63488 00:21:31.709 }, 00:21:31.709 { 00:21:31.709 "name": null, 00:21:31.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.709 "is_configured": false, 00:21:31.709 "data_offset": 2048, 00:21:31.709 "data_size": 63488 00:21:31.709 }, 00:21:31.709 { 00:21:31.709 "name": "BaseBdev3", 00:21:31.709 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:31.709 "is_configured": true, 00:21:31.709 "data_offset": 2048, 00:21:31.709 "data_size": 63488 00:21:31.709 }, 00:21:31.709 { 00:21:31.709 "name": "BaseBdev4", 00:21:31.709 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:31.709 "is_configured": true, 00:21:31.709 "data_offset": 2048, 00:21:31.709 "data_size": 63488 00:21:31.709 } 00:21:31.709 ] 00:21:31.709 }' 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.709 [2024-11-04 14:54:01.476457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:31.709 [2024-11-04 14:54:01.476737] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:31.709 [2024-11-04 14:54:01.476759] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:31.709 request: 00:21:31.709 { 00:21:31.709 "base_bdev": "BaseBdev1", 00:21:31.709 "raid_bdev": "raid_bdev1", 00:21:31.709 "method": "bdev_raid_add_base_bdev", 00:21:31.709 "req_id": 1 00:21:31.709 } 00:21:31.709 Got JSON-RPC error response 00:21:31.709 response: 00:21:31.709 { 00:21:31.709 "code": -22, 00:21:31.709 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:31.709 } 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:31.709 14:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:32.642 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:32.642 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.642 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.642 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.642 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.642 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:32.642 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.642 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.643 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.643 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.643 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.643 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.643 14:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.643 14:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:32.643 14:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.900 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.900 "name": "raid_bdev1", 00:21:32.900 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:32.900 "strip_size_kb": 0, 00:21:32.900 "state": "online", 00:21:32.900 "raid_level": "raid1", 00:21:32.900 "superblock": true, 00:21:32.900 "num_base_bdevs": 4, 00:21:32.900 "num_base_bdevs_discovered": 2, 00:21:32.900 "num_base_bdevs_operational": 2, 00:21:32.900 "base_bdevs_list": [ 00:21:32.900 { 00:21:32.900 "name": null, 00:21:32.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.900 "is_configured": false, 00:21:32.900 "data_offset": 0, 00:21:32.900 "data_size": 63488 00:21:32.900 }, 00:21:32.900 { 00:21:32.900 "name": null, 00:21:32.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.900 "is_configured": false, 00:21:32.900 "data_offset": 2048, 00:21:32.900 "data_size": 63488 00:21:32.900 }, 00:21:32.900 { 00:21:32.900 "name": "BaseBdev3", 00:21:32.900 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:32.900 "is_configured": true, 00:21:32.900 "data_offset": 2048, 00:21:32.900 "data_size": 63488 00:21:32.900 }, 00:21:32.900 { 00:21:32.900 "name": "BaseBdev4", 00:21:32.900 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:32.900 "is_configured": true, 00:21:32.900 "data_offset": 2048, 00:21:32.900 "data_size": 63488 00:21:32.900 } 00:21:32.900 ] 00:21:32.900 }' 00:21:32.900 14:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.900 14:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.158 14:54:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.158 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.417 "name": "raid_bdev1", 00:21:33.417 "uuid": "ab94cbe3-c4a7-466a-8ef7-4751c6613446", 00:21:33.417 "strip_size_kb": 0, 00:21:33.417 "state": "online", 00:21:33.417 "raid_level": "raid1", 00:21:33.417 "superblock": true, 00:21:33.417 "num_base_bdevs": 4, 00:21:33.417 "num_base_bdevs_discovered": 2, 00:21:33.417 "num_base_bdevs_operational": 2, 00:21:33.417 "base_bdevs_list": [ 00:21:33.417 { 00:21:33.417 "name": null, 00:21:33.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.417 "is_configured": false, 00:21:33.417 "data_offset": 0, 00:21:33.417 "data_size": 63488 00:21:33.417 }, 00:21:33.417 { 00:21:33.417 "name": null, 00:21:33.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.417 "is_configured": false, 00:21:33.417 "data_offset": 2048, 00:21:33.417 "data_size": 63488 00:21:33.417 }, 00:21:33.417 { 00:21:33.417 "name": "BaseBdev3", 00:21:33.417 "uuid": "f60ce697-af96-5614-8afe-642691ecb04a", 00:21:33.417 "is_configured": true, 00:21:33.417 "data_offset": 2048, 00:21:33.417 "data_size": 63488 00:21:33.417 }, 
00:21:33.417 { 00:21:33.417 "name": "BaseBdev4", 00:21:33.417 "uuid": "302dbd0e-9051-5250-9c31-d6fec4542d8d", 00:21:33.417 "is_configured": true, 00:21:33.417 "data_offset": 2048, 00:21:33.417 "data_size": 63488 00:21:33.417 } 00:21:33.417 ] 00:21:33.417 }' 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78418 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78418 ']' 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78418 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78418 00:21:33.417 killing process with pid 78418 00:21:33.417 Received shutdown signal, test time was about 60.000000 seconds 00:21:33.417 00:21:33.417 Latency(us) 00:21:33.417 [2024-11-04T14:54:03.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.417 [2024-11-04T14:54:03.309Z] =================================================================================================================== 00:21:33.417 [2024-11-04T14:54:03.309Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78418' 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78418 00:21:33.417 [2024-11-04 14:54:03.203451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:33.417 14:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78418 00:21:33.417 [2024-11-04 14:54:03.203615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:33.417 [2024-11-04 14:54:03.203716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:33.417 [2024-11-04 14:54:03.203739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:33.983 [2024-11-04 14:54:03.641114] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:34.915 00:21:34.915 real 0m29.370s 00:21:34.915 user 0m35.134s 00:21:34.915 sys 0m4.073s 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.915 ************************************ 00:21:34.915 END TEST raid_rebuild_test_sb 00:21:34.915 ************************************ 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.915 14:54:04 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:21:34.915 14:54:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:34.915 14:54:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.915 14:54:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:21:34.915 ************************************ 00:21:34.915 START TEST raid_rebuild_test_io 00:21:34.915 ************************************ 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79214 00:21:34.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79214 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 79214 ']' 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:34.915 14:54:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.173 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:35.173 Zero copy mechanism will not be used. 00:21:35.173 [2024-11-04 14:54:04.857169] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
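For readability, the bdevperf invocation traced just above can be annotated flag by flag. This is a best-effort reading of the command line captured in the log, not authoritative documentation; consult `bdevperf --help` in the SPDK tree for the definitive flag list.

```shell
# Annotated reconstruction of the bdevperf command from the xtrace above.
# Flag meanings are best-effort readings of this run, not authoritative docs:
#   -T raid_bdev1    confine I/O to the raid bdev under test
#   -t 60            run the workload for 60 seconds
#   -w randrw -M 50  random mixed workload, 50% reads / 50% writes
#   -o 3M            3 MiB I/O size (3145728 bytes -- hence the "greater than
#                    zero copy threshold (65536)" notice in the log)
#   -q 2             queue depth of 2
#   -z               start suspended; the test later triggers I/O over the
#                    RPC socket with bdevperf.py perform_tests
#   -L bdev_raid     enable the bdev_raid debug log flag (the *DEBUG* lines)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
```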
00:21:35.173 [2024-11-04 14:54:04.857345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79214 ] 00:21:35.173 [2024-11-04 14:54:05.030811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.431 [2024-11-04 14:54:05.164991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.698 [2024-11-04 14:54:05.376153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.698 [2024-11-04 14:54:05.376193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 BaseBdev1_malloc 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 [2024-11-04 14:54:05.961262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:21:36.315 [2024-11-04 14:54:05.961349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.315 [2024-11-04 14:54:05.961385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:36.315 [2024-11-04 14:54:05.961404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.315 [2024-11-04 14:54:05.964266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.315 [2024-11-04 14:54:05.964317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:36.315 BaseBdev1 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 BaseBdev2_malloc 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 [2024-11-04 14:54:06.018838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:36.315 [2024-11-04 14:54:06.018930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.315 [2024-11-04 14:54:06.018960] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:36.315 [2024-11-04 14:54:06.018981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.315 [2024-11-04 14:54:06.022117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.315 [2024-11-04 14:54:06.022176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:36.315 BaseBdev2 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 BaseBdev3_malloc 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 [2024-11-04 14:54:06.082890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:36.315 [2024-11-04 14:54:06.083161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.315 [2024-11-04 14:54:06.083213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:36.315 [2024-11-04 14:54:06.083270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:36.315 [2024-11-04 14:54:06.086106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.315 [2024-11-04 14:54:06.086158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:36.315 BaseBdev3 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 BaseBdev4_malloc 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 [2024-11-04 14:54:06.139422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:36.315 [2024-11-04 14:54:06.139498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.315 [2024-11-04 14:54:06.139528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:36.315 [2024-11-04 14:54:06.139547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.315 [2024-11-04 14:54:06.142352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.315 [2024-11-04 14:54:06.142403] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:36.315 BaseBdev4 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 spare_malloc 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 spare_delay 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.315 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 [2024-11-04 14:54:06.199926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:36.315 [2024-11-04 14:54:06.200005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.315 [2024-11-04 14:54:06.200043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:36.315 [2024-11-04 14:54:06.200069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
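The setup phase traced above reduces to a short RPC sequence: each of the four base bdevs is a 32 MiB / 512 B-block malloc bdev wrapped in a passthru bdev, and a fifth malloc bdev is wrapped in a delay bdev (to slow the rebuild) and then a passthru named `spare`. A condensed sketch, assuming a standalone `rpc.py` invocation (the test script itself issues these through its `rpc_cmd` wrapper against `/var/tmp/spdk.sock`):

```shell
# Condensed from the xtrace above. RPC variable and rpc.py path are
# assumptions for this sketch; the test uses its own rpc_cmd wrapper.
RPC="scripts/rpc.py"

for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, then a passthru on top
    $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# Spare: malloc -> delay (adds latency so the rebuild is observable) -> passthru
$RPC bdev_malloc_create 32 512 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare
```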
00:21:36.315 [2024-11-04 14:54:06.202873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.315 [2024-11-04 14:54:06.203115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:36.574 spare 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.574 [2024-11-04 14:54:06.208067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.574 [2024-11-04 14:54:06.210506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:36.574 [2024-11-04 14:54:06.210604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:36.574 [2024-11-04 14:54:06.210686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:36.574 [2024-11-04 14:54:06.210798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:36.574 [2024-11-04 14:54:06.210821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:36.574 [2024-11-04 14:54:06.211147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:36.574 [2024-11-04 14:54:06.211404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:36.574 [2024-11-04 14:54:06.211425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:36.574 [2024-11-04 14:54:06.211626] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.574 "name": "raid_bdev1", 00:21:36.574 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:36.574 
"strip_size_kb": 0, 00:21:36.574 "state": "online", 00:21:36.574 "raid_level": "raid1", 00:21:36.574 "superblock": false, 00:21:36.574 "num_base_bdevs": 4, 00:21:36.574 "num_base_bdevs_discovered": 4, 00:21:36.574 "num_base_bdevs_operational": 4, 00:21:36.574 "base_bdevs_list": [ 00:21:36.574 { 00:21:36.574 "name": "BaseBdev1", 00:21:36.574 "uuid": "cd86a0c6-f6c4-5441-97b5-3b6ab572d5f3", 00:21:36.574 "is_configured": true, 00:21:36.574 "data_offset": 0, 00:21:36.574 "data_size": 65536 00:21:36.574 }, 00:21:36.574 { 00:21:36.574 "name": "BaseBdev2", 00:21:36.574 "uuid": "09caa526-2281-5e27-a3fe-c5fea39d07a3", 00:21:36.574 "is_configured": true, 00:21:36.574 "data_offset": 0, 00:21:36.574 "data_size": 65536 00:21:36.574 }, 00:21:36.574 { 00:21:36.574 "name": "BaseBdev3", 00:21:36.574 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:36.574 "is_configured": true, 00:21:36.574 "data_offset": 0, 00:21:36.574 "data_size": 65536 00:21:36.574 }, 00:21:36.574 { 00:21:36.574 "name": "BaseBdev4", 00:21:36.574 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:36.574 "is_configured": true, 00:21:36.574 "data_offset": 0, 00:21:36.574 "data_size": 65536 00:21:36.574 } 00:21:36.574 ] 00:21:36.574 }' 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.574 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.140 [2024-11-04 14:54:06.764777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.140 14:54:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.140 [2024-11-04 14:54:06.876259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.140 "name": "raid_bdev1", 00:21:37.140 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:37.140 "strip_size_kb": 0, 00:21:37.140 "state": "online", 00:21:37.140 "raid_level": "raid1", 00:21:37.140 "superblock": false, 00:21:37.140 "num_base_bdevs": 4, 00:21:37.140 "num_base_bdevs_discovered": 3, 00:21:37.140 "num_base_bdevs_operational": 3, 00:21:37.140 "base_bdevs_list": [ 00:21:37.140 { 00:21:37.140 "name": null, 00:21:37.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.140 "is_configured": false, 00:21:37.140 "data_offset": 0, 00:21:37.140 "data_size": 65536 00:21:37.140 
}, 00:21:37.140 { 00:21:37.140 "name": "BaseBdev2", 00:21:37.140 "uuid": "09caa526-2281-5e27-a3fe-c5fea39d07a3", 00:21:37.140 "is_configured": true, 00:21:37.140 "data_offset": 0, 00:21:37.140 "data_size": 65536 00:21:37.140 }, 00:21:37.140 { 00:21:37.140 "name": "BaseBdev3", 00:21:37.140 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:37.140 "is_configured": true, 00:21:37.140 "data_offset": 0, 00:21:37.140 "data_size": 65536 00:21:37.140 }, 00:21:37.140 { 00:21:37.140 "name": "BaseBdev4", 00:21:37.140 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:37.140 "is_configured": true, 00:21:37.140 "data_offset": 0, 00:21:37.140 "data_size": 65536 00:21:37.140 } 00:21:37.140 ] 00:21:37.140 }' 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.140 14:54:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.140 [2024-11-04 14:54:07.016770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:37.140 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:37.140 Zero copy mechanism will not be used. 00:21:37.140 Running I/O for 60 seconds... 
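The degrade-and-rebuild flow that the trace then exercises can be sketched as the handful of RPCs it reduces to. This is a hedged condensation of the log, assuming a standalone `rpc.py` invocation; the test drives the same calls through `rpc_cmd` while bdevperf generates the background random I/O, and the exact interleaving with `perform_tests` follows the script, not this sketch.

```shell
# Sketch of the degrade/rebuild sequence from the xtrace above (assumed
# rpc.py paths; the test script uses rpc_cmd and its own ordering).
RPC="scripts/rpc.py"

# Start the background workload in the suspended bdevperf process
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests

# Pull one member: the raid1 bdev drops to 3 of 4 members but stays online
$RPC bdev_raid_remove_base_bdev BaseBdev1

# Attach the delay-backed spare; the rebuild process starts onto it
$RPC bdev_raid_add_base_bdev raid_bdev1 spare

# Observe progress: .[].process becomes {"type": "rebuild", "target": "spare", ...}
$RPC bdev_raid_get_bdevs all
```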
00:21:37.706 14:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:37.706 14:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.706 14:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.706 [2024-11-04 14:54:07.380026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:37.706 14:54:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.706 14:54:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:37.706 [2024-11-04 14:54:07.451371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:37.706 [2024-11-04 14:54:07.454285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:37.706 [2024-11-04 14:54:07.576131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:37.706 [2024-11-04 14:54:07.576870] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:37.964 [2024-11-04 14:54:07.709733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:37.964 [2024-11-04 14:54:07.710738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:38.477 145.00 IOPS, 435.00 MiB/s [2024-11-04T14:54:08.369Z] [2024-11-04 14:54:08.215164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:38.477 [2024-11-04 14:54:08.216223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.735 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.735 "name": "raid_bdev1", 00:21:38.735 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:38.735 "strip_size_kb": 0, 00:21:38.735 "state": "online", 00:21:38.735 "raid_level": "raid1", 00:21:38.735 "superblock": false, 00:21:38.735 "num_base_bdevs": 4, 00:21:38.735 "num_base_bdevs_discovered": 4, 00:21:38.735 "num_base_bdevs_operational": 4, 00:21:38.735 "process": { 00:21:38.735 "type": "rebuild", 00:21:38.735 "target": "spare", 00:21:38.735 "progress": { 00:21:38.735 "blocks": 12288, 00:21:38.735 "percent": 18 00:21:38.735 } 00:21:38.735 }, 00:21:38.735 "base_bdevs_list": [ 00:21:38.735 { 00:21:38.735 "name": "spare", 00:21:38.735 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:38.735 "is_configured": true, 00:21:38.735 "data_offset": 0, 00:21:38.736 "data_size": 65536 00:21:38.736 }, 00:21:38.736 { 
00:21:38.736 "name": "BaseBdev2", 00:21:38.736 "uuid": "09caa526-2281-5e27-a3fe-c5fea39d07a3", 00:21:38.736 "is_configured": true, 00:21:38.736 "data_offset": 0, 00:21:38.736 "data_size": 65536 00:21:38.736 }, 00:21:38.736 { 00:21:38.736 "name": "BaseBdev3", 00:21:38.736 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:38.736 "is_configured": true, 00:21:38.736 "data_offset": 0, 00:21:38.736 "data_size": 65536 00:21:38.736 }, 00:21:38.736 { 00:21:38.736 "name": "BaseBdev4", 00:21:38.736 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:38.736 "is_configured": true, 00:21:38.736 "data_offset": 0, 00:21:38.736 "data_size": 65536 00:21:38.736 } 00:21:38.736 ] 00:21:38.736 }' 00:21:38.736 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.736 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.736 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.736 [2024-11-04 14:54:08.541087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:38.736 [2024-11-04 14:54:08.543086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:38.736 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.736 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:38.736 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.736 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:38.736 [2024-11-04 14:54:08.585012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:38.999 [2024-11-04 14:54:08.657020] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:38.999 [2024-11-04 14:54:08.666263] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:38.999 [2024-11-04 14:54:08.688433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.999 [2024-11-04 14:54:08.688851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:38.999 [2024-11-04 14:54:08.688912] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:38.999 [2024-11-04 14:54:08.723204] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:38.999 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.999 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:38.999 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.000 "name": "raid_bdev1", 00:21:39.000 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:39.000 "strip_size_kb": 0, 00:21:39.000 "state": "online", 00:21:39.000 "raid_level": "raid1", 00:21:39.000 "superblock": false, 00:21:39.000 "num_base_bdevs": 4, 00:21:39.000 "num_base_bdevs_discovered": 3, 00:21:39.000 "num_base_bdevs_operational": 3, 00:21:39.000 "base_bdevs_list": [ 00:21:39.000 { 00:21:39.000 "name": null, 00:21:39.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.000 "is_configured": false, 00:21:39.000 "data_offset": 0, 00:21:39.000 "data_size": 65536 00:21:39.000 }, 00:21:39.000 { 00:21:39.000 "name": "BaseBdev2", 00:21:39.000 "uuid": "09caa526-2281-5e27-a3fe-c5fea39d07a3", 00:21:39.000 "is_configured": true, 00:21:39.000 "data_offset": 0, 00:21:39.000 "data_size": 65536 00:21:39.000 }, 00:21:39.000 { 00:21:39.000 "name": "BaseBdev3", 00:21:39.000 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:39.000 "is_configured": true, 00:21:39.000 "data_offset": 0, 00:21:39.000 "data_size": 65536 00:21:39.000 }, 00:21:39.000 { 00:21:39.000 "name": "BaseBdev4", 00:21:39.000 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:39.000 "is_configured": true, 00:21:39.000 "data_offset": 0, 00:21:39.000 "data_size": 65536 00:21:39.000 } 00:21:39.000 ] 00:21:39.000 }' 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:39.000 14:54:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.516 125.00 IOPS, 375.00 MiB/s [2024-11-04T14:54:09.408Z] 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.516 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.516 "name": "raid_bdev1", 00:21:39.516 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:39.516 "strip_size_kb": 0, 00:21:39.516 "state": "online", 00:21:39.517 "raid_level": "raid1", 00:21:39.517 "superblock": false, 00:21:39.517 "num_base_bdevs": 4, 00:21:39.517 "num_base_bdevs_discovered": 3, 00:21:39.517 "num_base_bdevs_operational": 3, 00:21:39.517 "base_bdevs_list": [ 00:21:39.517 { 00:21:39.517 "name": null, 00:21:39.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.517 "is_configured": false, 00:21:39.517 "data_offset": 0, 00:21:39.517 "data_size": 65536 00:21:39.517 }, 00:21:39.517 { 
00:21:39.517 "name": "BaseBdev2", 00:21:39.517 "uuid": "09caa526-2281-5e27-a3fe-c5fea39d07a3", 00:21:39.517 "is_configured": true, 00:21:39.517 "data_offset": 0, 00:21:39.517 "data_size": 65536 00:21:39.517 }, 00:21:39.517 { 00:21:39.517 "name": "BaseBdev3", 00:21:39.517 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:39.517 "is_configured": true, 00:21:39.517 "data_offset": 0, 00:21:39.517 "data_size": 65536 00:21:39.517 }, 00:21:39.517 { 00:21:39.517 "name": "BaseBdev4", 00:21:39.517 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:39.517 "is_configured": true, 00:21:39.517 "data_offset": 0, 00:21:39.517 "data_size": 65536 00:21:39.517 } 00:21:39.517 ] 00:21:39.517 }' 00:21:39.517 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.517 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:39.517 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.775 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:39.775 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:39.775 14:54:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.775 14:54:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.775 [2024-11-04 14:54:09.466373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:39.775 14:54:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.775 14:54:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:39.775 [2024-11-04 14:54:09.528870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:39.775 [2024-11-04 14:54:09.531656] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:39.775 [2024-11-04 14:54:09.642368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:39.775 [2024-11-04 14:54:09.643097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:40.032 [2024-11-04 14:54:09.856938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:40.032 [2024-11-04 14:54:09.857878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:40.548 126.33 IOPS, 379.00 MiB/s [2024-11-04T14:54:10.440Z] [2024-11-04 14:54:10.243573] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:40.808 "name": "raid_bdev1", 00:21:40.808 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:40.808 "strip_size_kb": 0, 00:21:40.808 "state": "online", 00:21:40.808 "raid_level": "raid1", 00:21:40.808 "superblock": false, 00:21:40.808 "num_base_bdevs": 4, 00:21:40.808 "num_base_bdevs_discovered": 4, 00:21:40.808 "num_base_bdevs_operational": 4, 00:21:40.808 "process": { 00:21:40.808 "type": "rebuild", 00:21:40.808 "target": "spare", 00:21:40.808 "progress": { 00:21:40.808 "blocks": 12288, 00:21:40.808 "percent": 18 00:21:40.808 } 00:21:40.808 }, 00:21:40.808 "base_bdevs_list": [ 00:21:40.808 { 00:21:40.808 "name": "spare", 00:21:40.808 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:40.808 "is_configured": true, 00:21:40.808 "data_offset": 0, 00:21:40.808 "data_size": 65536 00:21:40.808 }, 00:21:40.808 { 00:21:40.808 "name": "BaseBdev2", 00:21:40.808 "uuid": "09caa526-2281-5e27-a3fe-c5fea39d07a3", 00:21:40.808 "is_configured": true, 00:21:40.808 "data_offset": 0, 00:21:40.808 "data_size": 65536 00:21:40.808 }, 00:21:40.808 { 00:21:40.808 "name": "BaseBdev3", 00:21:40.808 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:40.808 "is_configured": true, 00:21:40.808 "data_offset": 0, 00:21:40.808 "data_size": 65536 00:21:40.808 }, 00:21:40.808 { 00:21:40.808 "name": "BaseBdev4", 00:21:40.808 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:40.808 "is_configured": true, 00:21:40.808 "data_offset": 0, 00:21:40.808 "data_size": 65536 00:21:40.808 } 00:21:40.808 ] 00:21:40.808 }' 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:40.808 
[2024-11-04 14:54:10.629714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.808 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:40.808 [2024-11-04 14:54:10.668399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:41.067 [2024-11-04 14:54:10.739802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:41.067 [2024-11-04 14:54:10.740693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:41.067 [2024-11-04 14:54:10.850514] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:41.067 [2024-11-04 14:54:10.850581] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.067 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.067 "name": "raid_bdev1", 00:21:41.067 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:41.067 "strip_size_kb": 0, 00:21:41.067 "state": "online", 00:21:41.067 "raid_level": "raid1", 00:21:41.067 "superblock": false, 00:21:41.067 "num_base_bdevs": 4, 00:21:41.067 "num_base_bdevs_discovered": 3, 00:21:41.067 "num_base_bdevs_operational": 3, 00:21:41.067 "process": { 00:21:41.067 "type": "rebuild", 00:21:41.067 "target": "spare", 00:21:41.067 "progress": { 00:21:41.067 "blocks": 16384, 00:21:41.067 "percent": 25 00:21:41.067 } 00:21:41.067 }, 00:21:41.067 "base_bdevs_list": [ 00:21:41.067 { 00:21:41.067 "name": "spare", 00:21:41.068 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:41.068 
"is_configured": true, 00:21:41.068 "data_offset": 0, 00:21:41.068 "data_size": 65536 00:21:41.068 }, 00:21:41.068 { 00:21:41.068 "name": null, 00:21:41.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.068 "is_configured": false, 00:21:41.068 "data_offset": 0, 00:21:41.068 "data_size": 65536 00:21:41.068 }, 00:21:41.068 { 00:21:41.068 "name": "BaseBdev3", 00:21:41.068 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:41.068 "is_configured": true, 00:21:41.068 "data_offset": 0, 00:21:41.068 "data_size": 65536 00:21:41.068 }, 00:21:41.068 { 00:21:41.068 "name": "BaseBdev4", 00:21:41.068 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:41.068 "is_configured": true, 00:21:41.068 "data_offset": 0, 00:21:41.068 "data_size": 65536 00:21:41.068 } 00:21:41.068 ] 00:21:41.068 }' 00:21:41.068 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.326 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:41.326 14:54:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=533 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.326 111.75 IOPS, 335.25 MiB/s [2024-11-04T14:54:11.218Z] 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.326 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.326 "name": "raid_bdev1", 00:21:41.326 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:41.326 "strip_size_kb": 0, 00:21:41.326 "state": "online", 00:21:41.326 "raid_level": "raid1", 00:21:41.326 "superblock": false, 00:21:41.326 "num_base_bdevs": 4, 00:21:41.326 "num_base_bdevs_discovered": 3, 00:21:41.326 "num_base_bdevs_operational": 3, 00:21:41.326 "process": { 00:21:41.326 "type": "rebuild", 00:21:41.326 "target": "spare", 00:21:41.326 "progress": { 00:21:41.326 "blocks": 18432, 00:21:41.326 "percent": 28 00:21:41.326 } 00:21:41.326 }, 00:21:41.326 "base_bdevs_list": [ 00:21:41.326 { 00:21:41.326 "name": "spare", 00:21:41.326 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:41.326 "is_configured": true, 00:21:41.326 "data_offset": 0, 00:21:41.326 "data_size": 65536 00:21:41.326 }, 00:21:41.326 { 00:21:41.326 "name": null, 00:21:41.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.326 "is_configured": false, 00:21:41.326 "data_offset": 0, 00:21:41.326 "data_size": 65536 00:21:41.326 }, 00:21:41.326 { 00:21:41.326 "name": "BaseBdev3", 00:21:41.327 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:41.327 "is_configured": true, 00:21:41.327 "data_offset": 0, 00:21:41.327 "data_size": 65536 
00:21:41.327 }, 00:21:41.327 { 00:21:41.327 "name": "BaseBdev4", 00:21:41.327 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:41.327 "is_configured": true, 00:21:41.327 "data_offset": 0, 00:21:41.327 "data_size": 65536 00:21:41.327 } 00:21:41.327 ] 00:21:41.327 }' 00:21:41.327 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.327 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:41.327 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.327 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:41.327 14:54:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:41.585 [2024-11-04 14:54:11.235008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:41.585 [2024-11-04 14:54:11.449149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:41.843 [2024-11-04 14:54:11.659946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:41.843 [2024-11-04 14:54:11.660384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:42.408 101.00 IOPS, 303.00 MiB/s [2024-11-04T14:54:12.300Z] 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.408 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.408 "name": "raid_bdev1", 00:21:42.408 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:42.408 "strip_size_kb": 0, 00:21:42.408 "state": "online", 00:21:42.409 "raid_level": "raid1", 00:21:42.409 "superblock": false, 00:21:42.409 "num_base_bdevs": 4, 00:21:42.409 "num_base_bdevs_discovered": 3, 00:21:42.409 "num_base_bdevs_operational": 3, 00:21:42.409 "process": { 00:21:42.409 "type": "rebuild", 00:21:42.409 "target": "spare", 00:21:42.409 "progress": { 00:21:42.409 "blocks": 36864, 00:21:42.409 "percent": 56 00:21:42.409 } 00:21:42.409 }, 00:21:42.409 "base_bdevs_list": [ 00:21:42.409 { 00:21:42.409 "name": "spare", 00:21:42.409 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:42.409 "is_configured": true, 00:21:42.409 "data_offset": 0, 00:21:42.409 "data_size": 65536 00:21:42.409 }, 00:21:42.409 { 00:21:42.409 "name": null, 00:21:42.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.409 "is_configured": false, 00:21:42.409 "data_offset": 0, 00:21:42.409 "data_size": 65536 00:21:42.409 }, 00:21:42.409 { 00:21:42.409 "name": "BaseBdev3", 00:21:42.409 "uuid": 
"7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:42.409 "is_configured": true, 00:21:42.409 "data_offset": 0, 00:21:42.409 "data_size": 65536 00:21:42.409 }, 00:21:42.409 { 00:21:42.409 "name": "BaseBdev4", 00:21:42.409 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:42.409 "is_configured": true, 00:21:42.409 "data_offset": 0, 00:21:42.409 "data_size": 65536 00:21:42.409 } 00:21:42.409 ] 00:21:42.409 }' 00:21:42.409 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.409 [2024-11-04 14:54:12.274475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:42.409 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:42.409 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.666 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:42.666 14:54:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:42.666 [2024-11-04 14:54:12.398500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:42.925 [2024-11-04 14:54:12.740754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:42.925 [2024-11-04 14:54:12.741744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:43.441 90.00 IOPS, 270.00 MiB/s [2024-11-04T14:54:13.333Z] [2024-11-04 14:54:13.207330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.699 "name": "raid_bdev1", 00:21:43.699 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:43.699 "strip_size_kb": 0, 00:21:43.699 "state": "online", 00:21:43.699 "raid_level": "raid1", 00:21:43.699 "superblock": false, 00:21:43.699 "num_base_bdevs": 4, 00:21:43.699 "num_base_bdevs_discovered": 3, 00:21:43.699 "num_base_bdevs_operational": 3, 00:21:43.699 "process": { 00:21:43.699 "type": "rebuild", 00:21:43.699 "target": "spare", 00:21:43.699 "progress": { 00:21:43.699 "blocks": 55296, 00:21:43.699 "percent": 84 00:21:43.699 } 00:21:43.699 }, 00:21:43.699 "base_bdevs_list": [ 00:21:43.699 { 00:21:43.699 "name": "spare", 00:21:43.699 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:43.699 "is_configured": true, 00:21:43.699 "data_offset": 0, 00:21:43.699 "data_size": 65536 00:21:43.699 }, 00:21:43.699 { 
00:21:43.699 "name": null, 00:21:43.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.699 "is_configured": false, 00:21:43.699 "data_offset": 0, 00:21:43.699 "data_size": 65536 00:21:43.699 }, 00:21:43.699 { 00:21:43.699 "name": "BaseBdev3", 00:21:43.699 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:43.699 "is_configured": true, 00:21:43.699 "data_offset": 0, 00:21:43.699 "data_size": 65536 00:21:43.699 }, 00:21:43.699 { 00:21:43.699 "name": "BaseBdev4", 00:21:43.699 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:43.699 "is_configured": true, 00:21:43.699 "data_offset": 0, 00:21:43.699 "data_size": 65536 00:21:43.699 } 00:21:43.699 ] 00:21:43.699 }' 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.699 14:54:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:44.267 [2024-11-04 14:54:13.874452] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:44.267 [2024-11-04 14:54:13.974403] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:44.267 [2024-11-04 14:54:13.976959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.833 82.14 IOPS, 246.43 MiB/s [2024-11-04T14:54:14.725Z] 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.833 "name": "raid_bdev1", 00:21:44.833 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:44.833 "strip_size_kb": 0, 00:21:44.833 "state": "online", 00:21:44.833 "raid_level": "raid1", 00:21:44.833 "superblock": false, 00:21:44.833 "num_base_bdevs": 4, 00:21:44.833 "num_base_bdevs_discovered": 3, 00:21:44.833 "num_base_bdevs_operational": 3, 00:21:44.833 "base_bdevs_list": [ 00:21:44.833 { 00:21:44.833 "name": "spare", 00:21:44.833 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:44.833 "is_configured": true, 00:21:44.833 "data_offset": 0, 00:21:44.833 "data_size": 65536 00:21:44.833 }, 00:21:44.833 { 00:21:44.833 "name": null, 00:21:44.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.833 "is_configured": false, 00:21:44.833 "data_offset": 0, 00:21:44.833 "data_size": 65536 00:21:44.833 }, 00:21:44.833 { 00:21:44.833 "name": "BaseBdev3", 00:21:44.833 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:44.833 "is_configured": true, 00:21:44.833 "data_offset": 0, 
00:21:44.833 "data_size": 65536 00:21:44.833 }, 00:21:44.833 { 00:21:44.833 "name": "BaseBdev4", 00:21:44.833 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:44.833 "is_configured": true, 00:21:44.833 "data_offset": 0, 00:21:44.833 "data_size": 65536 00:21:44.833 } 00:21:44.833 ] 00:21:44.833 }' 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:44.833 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.092 14:54:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.092 "name": "raid_bdev1", 00:21:45.092 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:45.092 "strip_size_kb": 0, 00:21:45.092 "state": "online", 00:21:45.092 "raid_level": "raid1", 00:21:45.092 "superblock": false, 00:21:45.092 "num_base_bdevs": 4, 00:21:45.092 "num_base_bdevs_discovered": 3, 00:21:45.092 "num_base_bdevs_operational": 3, 00:21:45.092 "base_bdevs_list": [ 00:21:45.092 { 00:21:45.092 "name": "spare", 00:21:45.092 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:45.092 "is_configured": true, 00:21:45.092 "data_offset": 0, 00:21:45.092 "data_size": 65536 00:21:45.092 }, 00:21:45.092 { 00:21:45.092 "name": null, 00:21:45.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.092 "is_configured": false, 00:21:45.092 "data_offset": 0, 00:21:45.092 "data_size": 65536 00:21:45.092 }, 00:21:45.092 { 00:21:45.092 "name": "BaseBdev3", 00:21:45.092 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:45.092 "is_configured": true, 00:21:45.092 "data_offset": 0, 00:21:45.092 "data_size": 65536 00:21:45.092 }, 00:21:45.092 { 00:21:45.092 "name": "BaseBdev4", 00:21:45.092 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:45.092 "is_configured": true, 00:21:45.092 "data_offset": 0, 00:21:45.092 "data_size": 65536 00:21:45.092 } 00:21:45.092 ] 00:21:45.092 }' 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:45.092 14:54:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.092 "name": "raid_bdev1", 00:21:45.092 "uuid": "840b84d3-4366-45f9-b240-f061a4215c71", 00:21:45.092 "strip_size_kb": 0, 00:21:45.092 "state": "online", 00:21:45.092 "raid_level": "raid1", 00:21:45.092 "superblock": false, 00:21:45.092 "num_base_bdevs": 4, 00:21:45.092 "num_base_bdevs_discovered": 3, 00:21:45.092 "num_base_bdevs_operational": 3, 00:21:45.092 "base_bdevs_list": [ 00:21:45.092 
{ 00:21:45.092 "name": "spare", 00:21:45.092 "uuid": "a062d3fd-3e5d-50f9-ba53-6522e701cbea", 00:21:45.092 "is_configured": true, 00:21:45.092 "data_offset": 0, 00:21:45.092 "data_size": 65536 00:21:45.092 }, 00:21:45.092 { 00:21:45.092 "name": null, 00:21:45.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.092 "is_configured": false, 00:21:45.092 "data_offset": 0, 00:21:45.092 "data_size": 65536 00:21:45.092 }, 00:21:45.092 { 00:21:45.092 "name": "BaseBdev3", 00:21:45.092 "uuid": "7a358afd-c43c-597a-9d2d-3f85efbb7c71", 00:21:45.092 "is_configured": true, 00:21:45.092 "data_offset": 0, 00:21:45.092 "data_size": 65536 00:21:45.092 }, 00:21:45.092 { 00:21:45.092 "name": "BaseBdev4", 00:21:45.092 "uuid": "b0310577-d5dc-5c00-b15b-de4a14a0547e", 00:21:45.092 "is_configured": true, 00:21:45.092 "data_offset": 0, 00:21:45.092 "data_size": 65536 00:21:45.092 } 00:21:45.092 ] 00:21:45.092 }' 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.092 14:54:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.609 76.25 IOPS, 228.75 MiB/s [2024-11-04T14:54:15.501Z] 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:45.609 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.609 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.609 [2024-11-04 14:54:15.420090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:45.609 [2024-11-04 14:54:15.420151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:45.609 00:21:45.609 Latency(us) 00:21:45.609 [2024-11-04T14:54:15.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.609 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:45.609 
raid_bdev1 : 8.44 74.05 222.15 0.00 0.00 17640.97 256.93 126782.37 00:21:45.609 [2024-11-04T14:54:15.501Z] =================================================================================================================== 00:21:45.609 [2024-11-04T14:54:15.501Z] Total : 74.05 222.15 0.00 0.00 17640.97 256.93 126782.37 00:21:45.609 [2024-11-04 14:54:15.481029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.609 [2024-11-04 14:54:15.481102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.609 [2024-11-04 14:54:15.481325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.609 [2024-11-04 14:54:15.481349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:45.609 { 00:21:45.609 "results": [ 00:21:45.609 { 00:21:45.609 "job": "raid_bdev1", 00:21:45.609 "core_mask": "0x1", 00:21:45.609 "workload": "randrw", 00:21:45.609 "percentage": 50, 00:21:45.609 "status": "finished", 00:21:45.609 "queue_depth": 2, 00:21:45.609 "io_size": 3145728, 00:21:45.609 "runtime": 8.440429, 00:21:45.609 "iops": 74.04836886845443, 00:21:45.609 "mibps": 222.1451066053633, 00:21:45.609 "io_failed": 0, 00:21:45.609 "io_timeout": 0, 00:21:45.609 "avg_latency_us": 17640.96707490909, 00:21:45.609 "min_latency_us": 256.9309090909091, 00:21:45.609 "max_latency_us": 126782.37090909091 00:21:45.609 } 00:21:45.609 ], 00:21:45.609 "core_count": 1 00:21:45.609 } 00:21:45.609 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.609 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:45.609 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.609 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.609 14:54:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:45.867 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:46.126 /dev/nbd0 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # local i 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:46.126 1+0 records in 00:21:46.126 1+0 records out 00:21:46.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654867 s, 6.3 MB/s 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' 
']' 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:46.126 14:54:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:46.384 /dev/nbd1 00:21:46.384 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:46.384 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:46.384 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:46.384 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:21:46.384 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:46.384 
14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:46.385 1+0 records in 00:21:46.385 1+0 records out 00:21:46.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612293 s, 6.7 MB/s 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:46.385 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:46.643 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:46.643 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:21:46.643 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:46.643 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:46.643 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:46.643 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:46.643 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:46.901 14:54:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:46.901 14:54:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:47.158 /dev/nbd1 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:47.158 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:21:47.416 1+0 records in 00:21:47.416 1+0 records out 00:21:47.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459839 s, 8.9 MB/s 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:47.416 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:47.674 14:54:17 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:47.674 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79214 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 79214 ']' 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 79214 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:47.931 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79214 00:21:47.931 killing process with pid 79214 00:21:47.931 Received shutdown signal, test time was about 10.756853 seconds 00:21:47.931 00:21:47.931 Latency(us) 00:21:47.931 [2024-11-04T14:54:17.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.931 [2024-11-04T14:54:17.823Z] =================================================================================================================== 00:21:47.931 [2024-11-04T14:54:17.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.932 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:47.932 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:47.932 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79214' 00:21:47.932 14:54:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@971 -- # kill 79214 00:21:47.932 [2024-11-04 14:54:17.776406] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:47.932 14:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 79214 00:21:48.496 [2024-11-04 14:54:18.146169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:49.427 ************************************ 00:21:49.427 END TEST raid_rebuild_test_io 00:21:49.427 ************************************ 00:21:49.427 00:21:49.427 real 0m14.489s 00:21:49.427 user 0m19.053s 00:21:49.427 sys 0m1.939s 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:49.427 14:54:19 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:21:49.427 14:54:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:49.427 14:54:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:49.427 14:54:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.427 ************************************ 00:21:49.427 START TEST raid_rebuild_test_sb_io 00:21:49.427 ************************************ 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 
00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 
-- # local strip_size 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79630 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79630 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79630 ']' 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:49.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:49.427 14:54:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:49.684 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:49.684 Zero copy mechanism will not be used. 00:21:49.684 [2024-11-04 14:54:19.427869] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:21:49.684 [2024-11-04 14:54:19.428051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79630 ] 00:21:49.948 [2024-11-04 14:54:19.611454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.948 [2024-11-04 14:54:19.741574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.207 [2024-11-04 14:54:19.944125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:50.207 [2024-11-04 14:54:19.944179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.784 BaseBdev1_malloc 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.784 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.784 [2024-11-04 14:54:20.409381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:50.784 [2024-11-04 14:54:20.409489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.785 [2024-11-04 14:54:20.409523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:50.785 [2024-11-04 14:54:20.409543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.785 [2024-11-04 14:54:20.412347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.785 [2024-11-04 14:54:20.412399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:50.785 BaseBdev1 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 BaseBdev2_malloc 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 [2024-11-04 14:54:20.457842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:50.785 [2024-11-04 14:54:20.457923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.785 [2024-11-04 14:54:20.457953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:50.785 [2024-11-04 14:54:20.457974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.785 [2024-11-04 14:54:20.460735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.785 [2024-11-04 14:54:20.460778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:50.785 BaseBdev2 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 BaseBdev3_malloc 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 [2024-11-04 14:54:20.516740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:50.785 [2024-11-04 14:54:20.516850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.785 [2024-11-04 14:54:20.516881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:50.785 [2024-11-04 14:54:20.516899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.785 [2024-11-04 14:54:20.520007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.785 [2024-11-04 14:54:20.520051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:50.785 BaseBdev3 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 BaseBdev4_malloc 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 [2024-11-04 14:54:20.567751] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:21:50.785 [2024-11-04 14:54:20.567842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.785 [2024-11-04 14:54:20.567869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:50.785 [2024-11-04 14:54:20.567887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.785 [2024-11-04 14:54:20.570797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.785 [2024-11-04 14:54:20.570856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:50.785 BaseBdev4 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 spare_malloc 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 spare_delay 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 [2024-11-04 14:54:20.629119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:50.785 [2024-11-04 14:54:20.629202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.785 [2024-11-04 14:54:20.629250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:50.785 [2024-11-04 14:54:20.629278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.785 [2024-11-04 14:54:20.632384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.785 [2024-11-04 14:54:20.632423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:50.785 spare 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 [2024-11-04 14:54:20.637283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:50.785 [2024-11-04 14:54:20.639944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:50.785 [2024-11-04 14:54:20.640058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:50.785 [2024-11-04 14:54:20.640164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:50.785 [2024-11-04 14:54:20.640459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:21:50.785 [2024-11-04 14:54:20.640501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:50.785 [2024-11-04 14:54:20.640860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:50.785 [2024-11-04 14:54:20.641125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:50.785 [2024-11-04 14:54:20.641142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:50.785 [2024-11-04 14:54:20.641428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.043 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.043 "name": "raid_bdev1", 00:21:51.043 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:51.043 "strip_size_kb": 0, 00:21:51.043 "state": "online", 00:21:51.043 "raid_level": "raid1", 00:21:51.043 "superblock": true, 00:21:51.043 "num_base_bdevs": 4, 00:21:51.043 "num_base_bdevs_discovered": 4, 00:21:51.043 "num_base_bdevs_operational": 4, 00:21:51.043 "base_bdevs_list": [ 00:21:51.043 { 00:21:51.043 "name": "BaseBdev1", 00:21:51.043 "uuid": "3fc5e1ef-0fe5-57e1-a4c5-7eace270c4cf", 00:21:51.043 "is_configured": true, 00:21:51.043 "data_offset": 2048, 00:21:51.043 "data_size": 63488 00:21:51.043 }, 00:21:51.043 { 00:21:51.043 "name": "BaseBdev2", 00:21:51.043 "uuid": "20de60f1-d3ff-5dd5-a943-0ab807238427", 00:21:51.043 "is_configured": true, 00:21:51.043 "data_offset": 2048, 00:21:51.043 "data_size": 63488 00:21:51.043 }, 00:21:51.043 { 00:21:51.043 "name": "BaseBdev3", 00:21:51.043 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:51.043 "is_configured": true, 00:21:51.043 "data_offset": 2048, 00:21:51.043 "data_size": 63488 00:21:51.043 }, 00:21:51.043 { 00:21:51.043 "name": "BaseBdev4", 00:21:51.043 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:51.043 "is_configured": true, 00:21:51.043 "data_offset": 2048, 00:21:51.043 "data_size": 63488 00:21:51.043 } 00:21:51.043 ] 00:21:51.043 }' 00:21:51.043 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:51.043 14:54:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.301 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:51.301 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.301 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.301 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:51.301 [2024-11-04 14:54:21.150264] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:51.301 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.558 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:51.559 14:54:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.559 [2024-11-04 14:54:21.253722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.559 "name": "raid_bdev1", 00:21:51.559 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:51.559 "strip_size_kb": 0, 00:21:51.559 "state": "online", 00:21:51.559 "raid_level": "raid1", 00:21:51.559 "superblock": true, 00:21:51.559 "num_base_bdevs": 4, 00:21:51.559 "num_base_bdevs_discovered": 3, 00:21:51.559 "num_base_bdevs_operational": 3, 00:21:51.559 "base_bdevs_list": [ 00:21:51.559 { 00:21:51.559 "name": null, 00:21:51.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.559 "is_configured": false, 00:21:51.559 "data_offset": 0, 00:21:51.559 "data_size": 63488 00:21:51.559 }, 00:21:51.559 { 00:21:51.559 "name": "BaseBdev2", 00:21:51.559 "uuid": "20de60f1-d3ff-5dd5-a943-0ab807238427", 00:21:51.559 "is_configured": true, 00:21:51.559 "data_offset": 2048, 00:21:51.559 "data_size": 63488 00:21:51.559 }, 00:21:51.559 { 00:21:51.559 "name": "BaseBdev3", 00:21:51.559 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:51.559 "is_configured": true, 00:21:51.559 "data_offset": 2048, 00:21:51.559 "data_size": 63488 00:21:51.559 }, 00:21:51.559 { 00:21:51.559 "name": "BaseBdev4", 00:21:51.559 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:51.559 "is_configured": true, 00:21:51.559 "data_offset": 2048, 00:21:51.559 "data_size": 63488 00:21:51.559 } 00:21:51.559 ] 00:21:51.559 }' 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.559 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.559 [2024-11-04 14:54:21.382434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:51.559 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:51.559 Zero copy mechanism will not be used. 
00:21:51.559 Running I/O for 60 seconds... 00:21:52.125 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.125 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.125 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 [2024-11-04 14:54:21.756429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.125 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.125 14:54:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:52.125 [2024-11-04 14:54:21.815631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:52.125 [2024-11-04 14:54:21.818370] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.692 179.00 IOPS, 537.00 MiB/s [2024-11-04T14:54:22.584Z] [2024-11-04 14:54:22.435582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:52.692 [2024-11-04 14:54:22.547827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:52.692 [2024-11-04 14:54:22.548875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.950 
14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.950 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.208 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.208 "name": "raid_bdev1", 00:21:53.208 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:53.208 "strip_size_kb": 0, 00:21:53.208 "state": "online", 00:21:53.208 "raid_level": "raid1", 00:21:53.208 "superblock": true, 00:21:53.208 "num_base_bdevs": 4, 00:21:53.208 "num_base_bdevs_discovered": 4, 00:21:53.208 "num_base_bdevs_operational": 4, 00:21:53.208 "process": { 00:21:53.208 "type": "rebuild", 00:21:53.208 "target": "spare", 00:21:53.208 "progress": { 00:21:53.208 "blocks": 12288, 00:21:53.208 "percent": 19 00:21:53.208 } 00:21:53.208 }, 00:21:53.208 "base_bdevs_list": [ 00:21:53.208 { 00:21:53.208 "name": "spare", 00:21:53.209 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:53.209 "is_configured": true, 00:21:53.209 "data_offset": 2048, 00:21:53.209 "data_size": 63488 00:21:53.209 }, 00:21:53.209 { 00:21:53.209 "name": "BaseBdev2", 00:21:53.209 "uuid": "20de60f1-d3ff-5dd5-a943-0ab807238427", 00:21:53.209 "is_configured": true, 00:21:53.209 "data_offset": 2048, 00:21:53.209 "data_size": 63488 00:21:53.209 }, 00:21:53.209 { 00:21:53.209 "name": "BaseBdev3", 00:21:53.209 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:53.209 "is_configured": true, 00:21:53.209 "data_offset": 2048, 
00:21:53.209 "data_size": 63488 00:21:53.209 }, 00:21:53.209 { 00:21:53.209 "name": "BaseBdev4", 00:21:53.209 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:53.209 "is_configured": true, 00:21:53.209 "data_offset": 2048, 00:21:53.209 "data_size": 63488 00:21:53.209 } 00:21:53.209 ] 00:21:53.209 }' 00:21:53.209 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.209 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.209 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.209 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.209 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:53.209 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.209 14:54:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.209 [2024-11-04 14:54:22.976567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.209 [2024-11-04 14:54:23.018367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:53.209 [2024-11-04 14:54:23.076916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:53.209 [2024-11-04 14:54:23.082137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.209 [2024-11-04 14:54:23.082193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.209 [2024-11-04 14:54:23.082231] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:53.512 [2024-11-04 14:54:23.136843] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: 
*DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.512 "name": "raid_bdev1", 00:21:53.512 
"uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:53.512 "strip_size_kb": 0, 00:21:53.512 "state": "online", 00:21:53.512 "raid_level": "raid1", 00:21:53.512 "superblock": true, 00:21:53.512 "num_base_bdevs": 4, 00:21:53.512 "num_base_bdevs_discovered": 3, 00:21:53.512 "num_base_bdevs_operational": 3, 00:21:53.512 "base_bdevs_list": [ 00:21:53.512 { 00:21:53.512 "name": null, 00:21:53.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.512 "is_configured": false, 00:21:53.512 "data_offset": 0, 00:21:53.512 "data_size": 63488 00:21:53.512 }, 00:21:53.512 { 00:21:53.512 "name": "BaseBdev2", 00:21:53.512 "uuid": "20de60f1-d3ff-5dd5-a943-0ab807238427", 00:21:53.512 "is_configured": true, 00:21:53.512 "data_offset": 2048, 00:21:53.512 "data_size": 63488 00:21:53.512 }, 00:21:53.512 { 00:21:53.512 "name": "BaseBdev3", 00:21:53.512 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:53.512 "is_configured": true, 00:21:53.512 "data_offset": 2048, 00:21:53.512 "data_size": 63488 00:21:53.512 }, 00:21:53.512 { 00:21:53.512 "name": "BaseBdev4", 00:21:53.512 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:53.512 "is_configured": true, 00:21:53.512 "data_offset": 2048, 00:21:53.512 "data_size": 63488 00:21:53.512 } 00:21:53.512 ] 00:21:53.512 }' 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.512 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:54.030 145.50 IOPS, 436.50 MiB/s [2024-11-04T14:54:23.922Z] 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 
00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.030 "name": "raid_bdev1", 00:21:54.030 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:54.030 "strip_size_kb": 0, 00:21:54.030 "state": "online", 00:21:54.030 "raid_level": "raid1", 00:21:54.030 "superblock": true, 00:21:54.030 "num_base_bdevs": 4, 00:21:54.030 "num_base_bdevs_discovered": 3, 00:21:54.030 "num_base_bdevs_operational": 3, 00:21:54.030 "base_bdevs_list": [ 00:21:54.030 { 00:21:54.030 "name": null, 00:21:54.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.030 "is_configured": false, 00:21:54.030 "data_offset": 0, 00:21:54.030 "data_size": 63488 00:21:54.030 }, 00:21:54.030 { 00:21:54.030 "name": "BaseBdev2", 00:21:54.030 "uuid": "20de60f1-d3ff-5dd5-a943-0ab807238427", 00:21:54.030 "is_configured": true, 00:21:54.030 "data_offset": 2048, 00:21:54.030 "data_size": 63488 00:21:54.030 }, 00:21:54.030 { 00:21:54.030 "name": "BaseBdev3", 00:21:54.030 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:54.030 "is_configured": true, 00:21:54.030 "data_offset": 2048, 00:21:54.030 "data_size": 63488 00:21:54.030 }, 00:21:54.030 { 00:21:54.030 "name": "BaseBdev4", 00:21:54.030 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:54.030 "is_configured": true, 00:21:54.030 
"data_offset": 2048, 00:21:54.030 "data_size": 63488 00:21:54.030 } 00:21:54.030 ] 00:21:54.030 }' 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:54.030 [2024-11-04 14:54:23.859730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.030 14:54:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:54.030 [2024-11-04 14:54:23.912626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:54.030 [2024-11-04 14:54:23.915667] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:54.290 [2024-11-04 14:54:24.051219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:54.548 [2024-11-04 14:54:24.193666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:54.548 [2024-11-04 14:54:24.194733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:54.806 145.00 IOPS, 435.00 MiB/s 
[2024-11-04T14:54:24.698Z] [2024-11-04 14:54:24.557190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:55.064 [2024-11-04 14:54:24.783784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:55.064 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.322 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.322 "name": "raid_bdev1", 00:21:55.322 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:55.322 "strip_size_kb": 0, 00:21:55.322 "state": "online", 00:21:55.322 "raid_level": "raid1", 00:21:55.322 "superblock": true, 00:21:55.322 "num_base_bdevs": 4, 00:21:55.322 "num_base_bdevs_discovered": 4, 00:21:55.322 "num_base_bdevs_operational": 4, 00:21:55.322 "process": { 00:21:55.322 "type": "rebuild", 
00:21:55.322 "target": "spare", 00:21:55.322 "progress": { 00:21:55.322 "blocks": 10240, 00:21:55.322 "percent": 16 00:21:55.322 } 00:21:55.322 }, 00:21:55.322 "base_bdevs_list": [ 00:21:55.322 { 00:21:55.322 "name": "spare", 00:21:55.322 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:55.322 "is_configured": true, 00:21:55.322 "data_offset": 2048, 00:21:55.322 "data_size": 63488 00:21:55.322 }, 00:21:55.322 { 00:21:55.322 "name": "BaseBdev2", 00:21:55.322 "uuid": "20de60f1-d3ff-5dd5-a943-0ab807238427", 00:21:55.322 "is_configured": true, 00:21:55.322 "data_offset": 2048, 00:21:55.322 "data_size": 63488 00:21:55.322 }, 00:21:55.322 { 00:21:55.322 "name": "BaseBdev3", 00:21:55.322 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:55.322 "is_configured": true, 00:21:55.322 "data_offset": 2048, 00:21:55.322 "data_size": 63488 00:21:55.322 }, 00:21:55.322 { 00:21:55.322 "name": "BaseBdev4", 00:21:55.322 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:55.322 "is_configured": true, 00:21:55.322 "data_offset": 2048, 00:21:55.322 "data_size": 63488 00:21:55.322 } 00:21:55.322 ] 00:21:55.322 }' 00:21:55.322 14:54:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.322 [2024-11-04 14:54:25.064612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:55.322 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.322 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:55.322 [2024-11-04 14:54:25.075409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:55.581 124.75 IOPS, 374.25 MiB/s [2024-11-04T14:54:25.473Z] [2024-11-04 14:54:25.433319] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:55.581 [2024-11-04 14:54:25.433414] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.581 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.839 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.839 "name": "raid_bdev1", 00:21:55.839 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:55.839 "strip_size_kb": 0, 00:21:55.839 "state": "online", 00:21:55.839 "raid_level": "raid1", 00:21:55.839 "superblock": true, 00:21:55.839 "num_base_bdevs": 4, 00:21:55.839 "num_base_bdevs_discovered": 3, 00:21:55.839 "num_base_bdevs_operational": 3, 00:21:55.839 "process": { 00:21:55.839 "type": "rebuild", 00:21:55.839 "target": "spare", 00:21:55.839 "progress": { 00:21:55.839 "blocks": 16384, 00:21:55.839 "percent": 25 00:21:55.839 } 00:21:55.840 }, 00:21:55.840 "base_bdevs_list": [ 00:21:55.840 { 00:21:55.840 "name": "spare", 00:21:55.840 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:55.840 "is_configured": true, 00:21:55.840 "data_offset": 2048, 00:21:55.840 "data_size": 63488 00:21:55.840 }, 00:21:55.840 { 00:21:55.840 "name": null, 00:21:55.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.840 "is_configured": false, 00:21:55.840 "data_offset": 0, 00:21:55.840 "data_size": 63488 00:21:55.840 }, 00:21:55.840 { 00:21:55.840 "name": "BaseBdev3", 00:21:55.840 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:55.840 "is_configured": true, 00:21:55.840 "data_offset": 2048, 
00:21:55.840 "data_size": 63488 00:21:55.840 }, 00:21:55.840 { 00:21:55.840 "name": "BaseBdev4", 00:21:55.840 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:55.840 "is_configured": true, 00:21:55.840 "data_offset": 2048, 00:21:55.840 "data_size": 63488 00:21:55.840 } 00:21:55.840 ] 00:21:55.840 }' 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=547 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.840 "name": "raid_bdev1", 00:21:55.840 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:55.840 "strip_size_kb": 0, 00:21:55.840 "state": "online", 00:21:55.840 "raid_level": "raid1", 00:21:55.840 "superblock": true, 00:21:55.840 "num_base_bdevs": 4, 00:21:55.840 "num_base_bdevs_discovered": 3, 00:21:55.840 "num_base_bdevs_operational": 3, 00:21:55.840 "process": { 00:21:55.840 "type": "rebuild", 00:21:55.840 "target": "spare", 00:21:55.840 "progress": { 00:21:55.840 "blocks": 18432, 00:21:55.840 "percent": 29 00:21:55.840 } 00:21:55.840 }, 00:21:55.840 "base_bdevs_list": [ 00:21:55.840 { 00:21:55.840 "name": "spare", 00:21:55.840 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:55.840 "is_configured": true, 00:21:55.840 "data_offset": 2048, 00:21:55.840 "data_size": 63488 00:21:55.840 }, 00:21:55.840 { 00:21:55.840 "name": null, 00:21:55.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.840 "is_configured": false, 00:21:55.840 "data_offset": 0, 00:21:55.840 "data_size": 63488 00:21:55.840 }, 00:21:55.840 { 00:21:55.840 "name": "BaseBdev3", 00:21:55.840 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:55.840 "is_configured": true, 00:21:55.840 "data_offset": 2048, 00:21:55.840 "data_size": 63488 00:21:55.840 }, 00:21:55.840 { 00:21:55.840 "name": "BaseBdev4", 00:21:55.840 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:55.840 "is_configured": true, 00:21:55.840 "data_offset": 2048, 00:21:55.840 "data_size": 63488 00:21:55.840 } 00:21:55.840 ] 00:21:55.840 }' 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:21:55.840 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.840 [2024-11-04 14:54:25.699107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:56.099 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.099 14:54:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:56.099 [2024-11-04 14:54:25.930707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:56.665 [2024-11-04 14:54:26.368295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:56.929 111.40 IOPS, 334.20 MiB/s [2024-11-04T14:54:26.821Z] [2024-11-04 14:54:26.610780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.929 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.929 "name": "raid_bdev1", 00:21:56.929 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:56.929 "strip_size_kb": 0, 00:21:56.929 "state": "online", 00:21:56.929 "raid_level": "raid1", 00:21:56.929 "superblock": true, 00:21:56.929 "num_base_bdevs": 4, 00:21:56.929 "num_base_bdevs_discovered": 3, 00:21:56.929 "num_base_bdevs_operational": 3, 00:21:56.929 "process": { 00:21:56.929 "type": "rebuild", 00:21:56.929 "target": "spare", 00:21:56.929 "progress": { 00:21:56.929 "blocks": 32768, 00:21:56.930 "percent": 51 00:21:56.930 } 00:21:56.930 }, 00:21:56.930 "base_bdevs_list": [ 00:21:56.930 { 00:21:56.930 "name": "spare", 00:21:56.930 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:56.930 "is_configured": true, 00:21:56.930 "data_offset": 2048, 00:21:56.930 "data_size": 63488 00:21:56.930 }, 00:21:56.930 { 00:21:56.930 "name": null, 00:21:56.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.930 "is_configured": false, 00:21:56.930 "data_offset": 0, 00:21:56.930 "data_size": 63488 00:21:56.930 }, 00:21:56.930 { 00:21:56.930 "name": "BaseBdev3", 00:21:56.930 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:56.930 "is_configured": true, 00:21:56.930 "data_offset": 2048, 00:21:56.930 "data_size": 63488 00:21:56.930 }, 00:21:56.930 { 00:21:56.930 "name": "BaseBdev4", 00:21:56.930 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:56.930 "is_configured": true, 00:21:56.930 "data_offset": 2048, 00:21:56.930 "data_size": 63488 00:21:56.930 } 00:21:56.930 ] 00:21:56.930 }' 00:21:56.930 14:54:26 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.188 [2024-11-04 14:54:26.833835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:57.188 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.188 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.188 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.188 14:54:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:58.012 103.50 IOPS, 310.50 MiB/s [2024-11-04T14:54:27.904Z] [2024-11-04 14:54:27.649904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:58.012 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.270 "name": "raid_bdev1", 00:21:58.270 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:58.270 "strip_size_kb": 0, 00:21:58.270 "state": "online", 00:21:58.270 "raid_level": "raid1", 00:21:58.270 "superblock": true, 00:21:58.270 "num_base_bdevs": 4, 00:21:58.270 "num_base_bdevs_discovered": 3, 00:21:58.270 "num_base_bdevs_operational": 3, 00:21:58.270 "process": { 00:21:58.270 "type": "rebuild", 00:21:58.270 "target": "spare", 00:21:58.270 "progress": { 00:21:58.270 "blocks": 49152, 00:21:58.270 "percent": 77 00:21:58.270 } 00:21:58.270 }, 00:21:58.270 "base_bdevs_list": [ 00:21:58.270 { 00:21:58.270 "name": "spare", 00:21:58.270 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:58.270 "is_configured": true, 00:21:58.270 "data_offset": 2048, 00:21:58.270 "data_size": 63488 00:21:58.270 }, 00:21:58.270 { 00:21:58.270 "name": null, 00:21:58.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.270 "is_configured": false, 00:21:58.270 "data_offset": 0, 00:21:58.270 "data_size": 63488 00:21:58.270 }, 00:21:58.270 { 00:21:58.270 "name": "BaseBdev3", 00:21:58.270 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:58.270 "is_configured": true, 00:21:58.270 "data_offset": 2048, 00:21:58.270 "data_size": 63488 00:21:58.270 }, 00:21:58.270 { 00:21:58.270 "name": "BaseBdev4", 00:21:58.270 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:58.270 "is_configured": true, 00:21:58.270 "data_offset": 2048, 00:21:58.270 "data_size": 63488 00:21:58.270 } 00:21:58.270 ] 00:21:58.270 }' 00:21:58.270 14:54:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.270 14:54:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:21:58.270 14:54:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.270 14:54:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.270 14:54:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:58.528 94.43 IOPS, 283.29 MiB/s [2024-11-04T14:54:28.420Z] [2024-11-04 14:54:28.416458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:58.786 [2024-11-04 14:54:28.650460] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:59.044 [2024-11-04 14:54:28.750531] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:59.044 [2024-11-04 14:54:28.753480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.302 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:59.302 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.302 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.302 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:59.302 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:59.302 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.302 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.303 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.303 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:59.303 14:54:29 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.303 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.303 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.303 "name": "raid_bdev1", 00:21:59.303 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:59.303 "strip_size_kb": 0, 00:21:59.303 "state": "online", 00:21:59.303 "raid_level": "raid1", 00:21:59.303 "superblock": true, 00:21:59.303 "num_base_bdevs": 4, 00:21:59.303 "num_base_bdevs_discovered": 3, 00:21:59.303 "num_base_bdevs_operational": 3, 00:21:59.303 "base_bdevs_list": [ 00:21:59.303 { 00:21:59.303 "name": "spare", 00:21:59.303 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:59.303 "is_configured": true, 00:21:59.303 "data_offset": 2048, 00:21:59.303 "data_size": 63488 00:21:59.303 }, 00:21:59.303 { 00:21:59.303 "name": null, 00:21:59.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.303 "is_configured": false, 00:21:59.303 "data_offset": 0, 00:21:59.303 "data_size": 63488 00:21:59.303 }, 00:21:59.303 { 00:21:59.303 "name": "BaseBdev3", 00:21:59.303 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:59.303 "is_configured": true, 00:21:59.303 "data_offset": 2048, 00:21:59.303 "data_size": 63488 00:21:59.303 }, 00:21:59.303 { 00:21:59.303 "name": "BaseBdev4", 00:21:59.303 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:59.303 "is_configured": true, 00:21:59.303 "data_offset": 2048, 00:21:59.303 "data_size": 63488 00:21:59.303 } 00:21:59.303 ] 00:21:59.303 }' 00:21:59.303 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.303 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:59.303 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.562 14:54:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.562 "name": "raid_bdev1", 00:21:59.562 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:59.562 "strip_size_kb": 0, 00:21:59.562 "state": "online", 00:21:59.562 "raid_level": "raid1", 00:21:59.562 "superblock": true, 00:21:59.562 "num_base_bdevs": 4, 00:21:59.562 "num_base_bdevs_discovered": 3, 00:21:59.562 "num_base_bdevs_operational": 3, 00:21:59.562 "base_bdevs_list": [ 00:21:59.562 { 00:21:59.562 "name": "spare", 00:21:59.562 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:59.562 "is_configured": true, 00:21:59.562 "data_offset": 2048, 00:21:59.562 
"data_size": 63488 00:21:59.562 }, 00:21:59.562 { 00:21:59.562 "name": null, 00:21:59.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.562 "is_configured": false, 00:21:59.562 "data_offset": 0, 00:21:59.562 "data_size": 63488 00:21:59.562 }, 00:21:59.562 { 00:21:59.562 "name": "BaseBdev3", 00:21:59.562 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:59.562 "is_configured": true, 00:21:59.562 "data_offset": 2048, 00:21:59.562 "data_size": 63488 00:21:59.562 }, 00:21:59.562 { 00:21:59.562 "name": "BaseBdev4", 00:21:59.562 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:59.562 "is_configured": true, 00:21:59.562 "data_offset": 2048, 00:21:59.562 "data_size": 63488 00:21:59.562 } 00:21:59.562 ] 00:21:59.562 }' 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:59.562 86.75 IOPS, 260.25 MiB/s [2024-11-04T14:54:29.454Z] 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:59.562 14:54:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.562 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.820 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.820 "name": "raid_bdev1", 00:21:59.820 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:21:59.820 "strip_size_kb": 0, 00:21:59.820 "state": "online", 00:21:59.820 "raid_level": "raid1", 00:21:59.820 "superblock": true, 00:21:59.820 "num_base_bdevs": 4, 00:21:59.820 "num_base_bdevs_discovered": 3, 00:21:59.820 "num_base_bdevs_operational": 3, 00:21:59.820 "base_bdevs_list": [ 00:21:59.820 { 00:21:59.820 "name": "spare", 00:21:59.820 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:21:59.820 "is_configured": true, 00:21:59.820 "data_offset": 2048, 00:21:59.820 "data_size": 63488 00:21:59.820 }, 00:21:59.820 { 00:21:59.820 "name": null, 00:21:59.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.820 "is_configured": false, 00:21:59.820 "data_offset": 0, 00:21:59.820 "data_size": 63488 00:21:59.820 }, 00:21:59.820 { 00:21:59.820 "name": "BaseBdev3", 00:21:59.820 "uuid": 
"4aeaa801-38ca-5fef-9260-a043cc672240", 00:21:59.820 "is_configured": true, 00:21:59.820 "data_offset": 2048, 00:21:59.820 "data_size": 63488 00:21:59.820 }, 00:21:59.820 { 00:21:59.820 "name": "BaseBdev4", 00:21:59.820 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:21:59.820 "is_configured": true, 00:21:59.820 "data_offset": 2048, 00:21:59.820 "data_size": 63488 00:21:59.820 } 00:21:59.820 ] 00:21:59.820 }' 00:21:59.820 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.820 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:00.078 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:00.078 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.078 14:54:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:00.078 [2024-11-04 14:54:29.952104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:00.078 [2024-11-04 14:54:29.952144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:00.336 00:22:00.336 Latency(us) 00:22:00.336 [2024-11-04T14:54:30.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.336 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:00.336 raid_bdev1 : 8.59 83.31 249.92 0.00 0.00 17018.13 288.58 112483.61 00:22:00.336 [2024-11-04T14:54:30.228Z] =================================================================================================================== 00:22:00.336 [2024-11-04T14:54:30.228Z] Total : 83.31 249.92 0.00 0.00 17018.13 288.58 112483.61 00:22:00.336 [2024-11-04 14:54:29.999361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.336 [2024-11-04 14:54:29.999417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:22:00.336 [2024-11-04 14:54:29.999553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.336 [2024-11-04 14:54:29.999570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:00.336 { 00:22:00.336 "results": [ 00:22:00.336 { 00:22:00.336 "job": "raid_bdev1", 00:22:00.336 "core_mask": "0x1", 00:22:00.336 "workload": "randrw", 00:22:00.336 "percentage": 50, 00:22:00.336 "status": "finished", 00:22:00.336 "queue_depth": 2, 00:22:00.336 "io_size": 3145728, 00:22:00.336 "runtime": 8.594622, 00:22:00.336 "iops": 83.30791045842389, 00:22:00.336 "mibps": 249.92373137527167, 00:22:00.336 "io_failed": 0, 00:22:00.336 "io_timeout": 0, 00:22:00.337 "avg_latency_us": 17018.131112239716, 00:22:00.337 "min_latency_us": 288.58181818181816, 00:22:00.337 "max_latency_us": 112483.60727272727 00:22:00.337 } 00:22:00.337 ], 00:22:00.337 "core_count": 1 00:22:00.337 } 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:22:00.337 14:54:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:00.337 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:22:00.594 /dev/nbd0 00:22:00.594 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:00.594 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:00.594 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:00.594 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:22:00.594 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:00.594 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # break 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.595 1+0 records in 00:22:00.595 1+0 records out 00:22:00.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313327 s, 13.1 MB/s 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:00.595 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:01.161 /dev/nbd1 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 
00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.161 1+0 records in 00:22:01.161 1+0 records out 00:22:01.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320757 s, 12.8 MB/s 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:01.161 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:22:01.162 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:01.162 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:01.162 14:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:01.162 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:22:01.162 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:01.162 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:01.162 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:01.162 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@51 -- # local i 00:22:01.162 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.162 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:01.728 14:54:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:01.728 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:01.986 /dev/nbd1 00:22:01.986 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:01.986 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:01.986 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:01.986 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:22:01.986 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.987 1+0 records in 00:22:01.987 1+0 records out 00:22:01.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381974 s, 10.7 MB/s 00:22:01.987 14:54:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.987 14:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:02.245 14:54:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.245 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.812 14:54:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 [2024-11-04 14:54:32.441238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:02.812 [2024-11-04 14:54:32.441489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.812 [2024-11-04 14:54:32.441573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:02.812 [2024-11-04 14:54:32.441738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.812 [2024-11-04 14:54:32.444764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.812 [2024-11-04 14:54:32.444940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:02.812 [2024-11-04 14:54:32.445080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:22:02.812 [2024-11-04 14:54:32.445165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:02.812 [2024-11-04 14:54:32.445388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.812 [2024-11-04 14:54:32.445578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:02.812 spare 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 [2024-11-04 14:54:32.545745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:02.812 [2024-11-04 14:54:32.545809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:02.812 [2024-11-04 14:54:32.546301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:22:02.812 [2024-11-04 14:54:32.546599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:02.812 [2024-11-04 14:54:32.546635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:02.812 [2024-11-04 14:54:32.546882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.812 "name": "raid_bdev1", 00:22:02.812 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:02.812 "strip_size_kb": 0, 00:22:02.812 "state": "online", 00:22:02.812 "raid_level": "raid1", 00:22:02.812 "superblock": true, 00:22:02.812 "num_base_bdevs": 4, 00:22:02.812 "num_base_bdevs_discovered": 3, 00:22:02.812 "num_base_bdevs_operational": 3, 00:22:02.812 "base_bdevs_list": [ 00:22:02.812 { 00:22:02.812 "name": "spare", 00:22:02.812 "uuid": 
"300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:22:02.812 "is_configured": true, 00:22:02.812 "data_offset": 2048, 00:22:02.812 "data_size": 63488 00:22:02.812 }, 00:22:02.812 { 00:22:02.812 "name": null, 00:22:02.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.812 "is_configured": false, 00:22:02.812 "data_offset": 2048, 00:22:02.812 "data_size": 63488 00:22:02.812 }, 00:22:02.812 { 00:22:02.812 "name": "BaseBdev3", 00:22:02.812 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:02.812 "is_configured": true, 00:22:02.812 "data_offset": 2048, 00:22:02.812 "data_size": 63488 00:22:02.812 }, 00:22:02.812 { 00:22:02.812 "name": "BaseBdev4", 00:22:02.812 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:02.812 "is_configured": true, 00:22:02.812 "data_offset": 2048, 00:22:02.812 "data_size": 63488 00:22:02.812 } 00:22:02.812 ] 00:22:02.812 }' 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.812 14:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.379 14:54:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.379 "name": "raid_bdev1", 00:22:03.379 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:03.379 "strip_size_kb": 0, 00:22:03.379 "state": "online", 00:22:03.379 "raid_level": "raid1", 00:22:03.379 "superblock": true, 00:22:03.379 "num_base_bdevs": 4, 00:22:03.379 "num_base_bdevs_discovered": 3, 00:22:03.379 "num_base_bdevs_operational": 3, 00:22:03.379 "base_bdevs_list": [ 00:22:03.379 { 00:22:03.379 "name": "spare", 00:22:03.379 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:22:03.379 "is_configured": true, 00:22:03.379 "data_offset": 2048, 00:22:03.379 "data_size": 63488 00:22:03.379 }, 00:22:03.379 { 00:22:03.379 "name": null, 00:22:03.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.379 "is_configured": false, 00:22:03.379 "data_offset": 2048, 00:22:03.379 "data_size": 63488 00:22:03.379 }, 00:22:03.379 { 00:22:03.379 "name": "BaseBdev3", 00:22:03.379 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:03.379 "is_configured": true, 00:22:03.379 "data_offset": 2048, 00:22:03.379 "data_size": 63488 00:22:03.379 }, 00:22:03.379 { 00:22:03.379 "name": "BaseBdev4", 00:22:03.379 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:03.379 "is_configured": true, 00:22:03.379 "data_offset": 2048, 00:22:03.379 "data_size": 63488 00:22:03.379 } 00:22:03.379 ] 00:22:03.379 }' 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.379 
14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:03.379 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:03.637 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.637 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.637 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.637 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.637 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.637 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:03.637 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.637 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.637 [2024-11-04 14:54:33.325953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.638 "name": "raid_bdev1", 00:22:03.638 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:03.638 "strip_size_kb": 0, 00:22:03.638 "state": "online", 00:22:03.638 "raid_level": "raid1", 00:22:03.638 "superblock": true, 00:22:03.638 "num_base_bdevs": 4, 00:22:03.638 "num_base_bdevs_discovered": 2, 00:22:03.638 "num_base_bdevs_operational": 2, 00:22:03.638 "base_bdevs_list": [ 00:22:03.638 { 00:22:03.638 "name": null, 00:22:03.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.638 "is_configured": false, 00:22:03.638 "data_offset": 0, 00:22:03.638 "data_size": 63488 00:22:03.638 }, 00:22:03.638 { 00:22:03.638 "name": null, 00:22:03.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.638 "is_configured": false, 00:22:03.638 "data_offset": 2048, 00:22:03.638 "data_size": 63488 00:22:03.638 }, 00:22:03.638 { 00:22:03.638 "name": 
"BaseBdev3", 00:22:03.638 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:03.638 "is_configured": true, 00:22:03.638 "data_offset": 2048, 00:22:03.638 "data_size": 63488 00:22:03.638 }, 00:22:03.638 { 00:22:03.638 "name": "BaseBdev4", 00:22:03.638 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:03.638 "is_configured": true, 00:22:03.638 "data_offset": 2048, 00:22:03.638 "data_size": 63488 00:22:03.638 } 00:22:03.638 ] 00:22:03.638 }' 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.638 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:04.215 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:04.215 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.215 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:04.215 [2024-11-04 14:54:33.874195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.215 [2024-11-04 14:54:33.874502] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:22:04.215 [2024-11-04 14:54:33.874538] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:04.215 [2024-11-04 14:54:33.874592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.215 [2024-11-04 14:54:33.888749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:22:04.215 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.215 14:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:04.215 [2024-11-04 14:54:33.891343] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.164 "name": "raid_bdev1", 00:22:05.164 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:05.164 "strip_size_kb": 0, 00:22:05.164 "state": "online", 
00:22:05.164 "raid_level": "raid1", 00:22:05.164 "superblock": true, 00:22:05.164 "num_base_bdevs": 4, 00:22:05.164 "num_base_bdevs_discovered": 3, 00:22:05.164 "num_base_bdevs_operational": 3, 00:22:05.164 "process": { 00:22:05.164 "type": "rebuild", 00:22:05.164 "target": "spare", 00:22:05.164 "progress": { 00:22:05.164 "blocks": 20480, 00:22:05.164 "percent": 32 00:22:05.164 } 00:22:05.164 }, 00:22:05.164 "base_bdevs_list": [ 00:22:05.164 { 00:22:05.164 "name": "spare", 00:22:05.164 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:22:05.164 "is_configured": true, 00:22:05.164 "data_offset": 2048, 00:22:05.164 "data_size": 63488 00:22:05.164 }, 00:22:05.164 { 00:22:05.164 "name": null, 00:22:05.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.164 "is_configured": false, 00:22:05.164 "data_offset": 2048, 00:22:05.164 "data_size": 63488 00:22:05.164 }, 00:22:05.164 { 00:22:05.164 "name": "BaseBdev3", 00:22:05.164 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:05.164 "is_configured": true, 00:22:05.164 "data_offset": 2048, 00:22:05.164 "data_size": 63488 00:22:05.164 }, 00:22:05.164 { 00:22:05.164 "name": "BaseBdev4", 00:22:05.164 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:05.164 "is_configured": true, 00:22:05.164 "data_offset": 2048, 00:22:05.164 "data_size": 63488 00:22:05.164 } 00:22:05.164 ] 00:22:05.164 }' 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.164 14:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.164 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.164 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:05.164 14:54:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.164 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:05.164 [2024-11-04 14:54:35.048823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:05.423 [2024-11-04 14:54:35.100704] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:05.423 [2024-11-04 14:54:35.100795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.423 [2024-11-04 14:54:35.100836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:05.423 [2024-11-04 14:54:35.100848] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.423 14:54:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.423 "name": "raid_bdev1", 00:22:05.423 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:05.423 "strip_size_kb": 0, 00:22:05.423 "state": "online", 00:22:05.423 "raid_level": "raid1", 00:22:05.423 "superblock": true, 00:22:05.423 "num_base_bdevs": 4, 00:22:05.423 "num_base_bdevs_discovered": 2, 00:22:05.423 "num_base_bdevs_operational": 2, 00:22:05.423 "base_bdevs_list": [ 00:22:05.423 { 00:22:05.423 "name": null, 00:22:05.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.423 "is_configured": false, 00:22:05.423 "data_offset": 0, 00:22:05.423 "data_size": 63488 00:22:05.423 }, 00:22:05.423 { 00:22:05.423 "name": null, 00:22:05.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.423 "is_configured": false, 00:22:05.423 "data_offset": 2048, 00:22:05.423 "data_size": 63488 00:22:05.423 }, 00:22:05.423 { 00:22:05.423 "name": "BaseBdev3", 00:22:05.423 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:05.423 "is_configured": true, 00:22:05.423 "data_offset": 2048, 00:22:05.423 "data_size": 63488 00:22:05.423 }, 00:22:05.423 { 00:22:05.423 "name": "BaseBdev4", 00:22:05.423 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:05.423 "is_configured": true, 00:22:05.423 "data_offset": 2048, 00:22:05.423 
"data_size": 63488 00:22:05.423 } 00:22:05.423 ] 00:22:05.423 }' 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.423 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:05.990 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:05.990 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.990 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:05.990 [2024-11-04 14:54:35.663833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:05.990 [2024-11-04 14:54:35.663926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.990 [2024-11-04 14:54:35.663968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:05.990 [2024-11-04 14:54:35.663988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.990 [2024-11-04 14:54:35.664609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.990 [2024-11-04 14:54:35.664643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:05.990 [2024-11-04 14:54:35.664776] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:05.990 [2024-11-04 14:54:35.664797] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:22:05.990 [2024-11-04 14:54:35.664814] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:05.990 [2024-11-04 14:54:35.664857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:05.990 [2024-11-04 14:54:35.678834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:22:05.990 spare 00:22:05.990 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.990 14:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:05.990 [2024-11-04 14:54:35.681338] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.924 "name": "raid_bdev1", 00:22:06.924 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:06.924 "strip_size_kb": 0, 00:22:06.924 
"state": "online", 00:22:06.924 "raid_level": "raid1", 00:22:06.924 "superblock": true, 00:22:06.924 "num_base_bdevs": 4, 00:22:06.924 "num_base_bdevs_discovered": 3, 00:22:06.924 "num_base_bdevs_operational": 3, 00:22:06.924 "process": { 00:22:06.924 "type": "rebuild", 00:22:06.924 "target": "spare", 00:22:06.924 "progress": { 00:22:06.924 "blocks": 20480, 00:22:06.924 "percent": 32 00:22:06.924 } 00:22:06.924 }, 00:22:06.924 "base_bdevs_list": [ 00:22:06.924 { 00:22:06.924 "name": "spare", 00:22:06.924 "uuid": "300334f7-7dd6-5a07-b382-76c8e0618fd1", 00:22:06.924 "is_configured": true, 00:22:06.924 "data_offset": 2048, 00:22:06.924 "data_size": 63488 00:22:06.924 }, 00:22:06.924 { 00:22:06.924 "name": null, 00:22:06.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.924 "is_configured": false, 00:22:06.924 "data_offset": 2048, 00:22:06.924 "data_size": 63488 00:22:06.924 }, 00:22:06.924 { 00:22:06.924 "name": "BaseBdev3", 00:22:06.924 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:06.924 "is_configured": true, 00:22:06.924 "data_offset": 2048, 00:22:06.924 "data_size": 63488 00:22:06.924 }, 00:22:06.924 { 00:22:06.924 "name": "BaseBdev4", 00:22:06.924 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:06.924 "is_configured": true, 00:22:06.924 "data_offset": 2048, 00:22:06.924 "data_size": 63488 00:22:06.924 } 00:22:06.924 ] 00:22:06.924 }' 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.924 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:07.183 14:54:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:07.183 [2024-11-04 14:54:36.858781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:07.183 [2024-11-04 14:54:36.889673] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:07.183 [2024-11-04 14:54:36.889800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.183 [2024-11-04 14:54:36.889829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:07.183 [2024-11-04 14:54:36.889844] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.183 14:54:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.183 "name": "raid_bdev1", 00:22:07.183 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:07.183 "strip_size_kb": 0, 00:22:07.183 "state": "online", 00:22:07.183 "raid_level": "raid1", 00:22:07.183 "superblock": true, 00:22:07.183 "num_base_bdevs": 4, 00:22:07.183 "num_base_bdevs_discovered": 2, 00:22:07.183 "num_base_bdevs_operational": 2, 00:22:07.183 "base_bdevs_list": [ 00:22:07.183 { 00:22:07.183 "name": null, 00:22:07.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.183 "is_configured": false, 00:22:07.183 "data_offset": 0, 00:22:07.183 "data_size": 63488 00:22:07.183 }, 00:22:07.183 { 00:22:07.183 "name": null, 00:22:07.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.183 "is_configured": false, 00:22:07.183 "data_offset": 2048, 00:22:07.183 "data_size": 63488 00:22:07.183 }, 00:22:07.183 { 00:22:07.183 "name": "BaseBdev3", 00:22:07.183 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:07.183 "is_configured": true, 00:22:07.183 "data_offset": 2048, 00:22:07.183 "data_size": 63488 00:22:07.183 }, 00:22:07.183 { 00:22:07.183 "name": "BaseBdev4", 00:22:07.183 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:07.183 "is_configured": true, 00:22:07.183 "data_offset": 2048, 00:22:07.183 
"data_size": 63488 00:22:07.183 } 00:22:07.183 ] 00:22:07.183 }' 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.183 14:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.749 "name": "raid_bdev1", 00:22:07.749 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:07.749 "strip_size_kb": 0, 00:22:07.749 "state": "online", 00:22:07.749 "raid_level": "raid1", 00:22:07.749 "superblock": true, 00:22:07.749 "num_base_bdevs": 4, 00:22:07.749 "num_base_bdevs_discovered": 2, 00:22:07.749 "num_base_bdevs_operational": 2, 00:22:07.749 "base_bdevs_list": [ 00:22:07.749 { 00:22:07.749 "name": null, 00:22:07.749 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:07.749 "is_configured": false, 00:22:07.749 "data_offset": 0, 00:22:07.749 "data_size": 63488 00:22:07.749 }, 00:22:07.749 { 00:22:07.749 "name": null, 00:22:07.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.749 "is_configured": false, 00:22:07.749 "data_offset": 2048, 00:22:07.749 "data_size": 63488 00:22:07.749 }, 00:22:07.749 { 00:22:07.749 "name": "BaseBdev3", 00:22:07.749 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:07.749 "is_configured": true, 00:22:07.749 "data_offset": 2048, 00:22:07.749 "data_size": 63488 00:22:07.749 }, 00:22:07.749 { 00:22:07.749 "name": "BaseBdev4", 00:22:07.749 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:07.749 "is_configured": true, 00:22:07.749 "data_offset": 2048, 00:22:07.749 "data_size": 63488 00:22:07.749 } 00:22:07.749 ] 00:22:07.749 }' 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.749 14:54:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:07.749 [2024-11-04 14:54:37.600488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:07.749 [2024-11-04 14:54:37.600598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.749 [2024-11-04 14:54:37.600637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:22:07.749 [2024-11-04 14:54:37.600661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.749 [2024-11-04 14:54:37.601433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.749 [2024-11-04 14:54:37.601490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:07.749 [2024-11-04 14:54:37.601635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:07.749 [2024-11-04 14:54:37.601671] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:22:07.749 [2024-11-04 14:54:37.601686] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:07.749 [2024-11-04 14:54:37.601707] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:07.749 BaseBdev1 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.749 14:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.122 "name": "raid_bdev1", 00:22:09.122 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:09.122 "strip_size_kb": 0, 00:22:09.122 "state": "online", 00:22:09.122 "raid_level": "raid1", 00:22:09.122 "superblock": true, 00:22:09.122 "num_base_bdevs": 4, 00:22:09.122 "num_base_bdevs_discovered": 2, 00:22:09.122 "num_base_bdevs_operational": 2, 00:22:09.122 "base_bdevs_list": [ 00:22:09.122 { 00:22:09.122 "name": null, 00:22:09.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.122 "is_configured": false, 00:22:09.122 
"data_offset": 0, 00:22:09.122 "data_size": 63488 00:22:09.122 }, 00:22:09.122 { 00:22:09.122 "name": null, 00:22:09.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.122 "is_configured": false, 00:22:09.122 "data_offset": 2048, 00:22:09.122 "data_size": 63488 00:22:09.122 }, 00:22:09.122 { 00:22:09.122 "name": "BaseBdev3", 00:22:09.122 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:09.122 "is_configured": true, 00:22:09.122 "data_offset": 2048, 00:22:09.122 "data_size": 63488 00:22:09.122 }, 00:22:09.122 { 00:22:09.122 "name": "BaseBdev4", 00:22:09.122 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:09.122 "is_configured": true, 00:22:09.122 "data_offset": 2048, 00:22:09.122 "data_size": 63488 00:22:09.122 } 00:22:09.122 ] 00:22:09.122 }' 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.122 14:54:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:09.380 "name": "raid_bdev1", 00:22:09.380 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:09.380 "strip_size_kb": 0, 00:22:09.380 "state": "online", 00:22:09.380 "raid_level": "raid1", 00:22:09.380 "superblock": true, 00:22:09.380 "num_base_bdevs": 4, 00:22:09.380 "num_base_bdevs_discovered": 2, 00:22:09.380 "num_base_bdevs_operational": 2, 00:22:09.380 "base_bdevs_list": [ 00:22:09.380 { 00:22:09.380 "name": null, 00:22:09.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.380 "is_configured": false, 00:22:09.380 "data_offset": 0, 00:22:09.380 "data_size": 63488 00:22:09.380 }, 00:22:09.380 { 00:22:09.380 "name": null, 00:22:09.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.380 "is_configured": false, 00:22:09.380 "data_offset": 2048, 00:22:09.380 "data_size": 63488 00:22:09.380 }, 00:22:09.380 { 00:22:09.380 "name": "BaseBdev3", 00:22:09.380 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:09.380 "is_configured": true, 00:22:09.380 "data_offset": 2048, 00:22:09.380 "data_size": 63488 00:22:09.380 }, 00:22:09.380 { 00:22:09.380 "name": "BaseBdev4", 00:22:09.380 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:09.380 "is_configured": true, 00:22:09.380 "data_offset": 2048, 00:22:09.380 "data_size": 63488 00:22:09.380 } 00:22:09.380 ] 00:22:09.380 }' 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:09.380 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:09.638 
14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:09.638 [2024-11-04 14:54:39.317573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.638 [2024-11-04 14:54:39.317870] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:22:09.638 [2024-11-04 14:54:39.317893] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:09.638 request: 00:22:09.638 { 00:22:09.638 "base_bdev": "BaseBdev1", 00:22:09.638 "raid_bdev": "raid_bdev1", 00:22:09.638 "method": "bdev_raid_add_base_bdev", 00:22:09.638 "req_id": 1 00:22:09.638 } 00:22:09.638 Got JSON-RPC error response 00:22:09.638 response: 00:22:09.638 { 00:22:09.638 "code": -22, 00:22:09.638 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:09.638 } 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:09.638 14:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.573 14:54:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.573 "name": "raid_bdev1", 00:22:10.573 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:10.573 "strip_size_kb": 0, 00:22:10.573 "state": "online", 00:22:10.573 "raid_level": "raid1", 00:22:10.573 "superblock": true, 00:22:10.573 "num_base_bdevs": 4, 00:22:10.573 "num_base_bdevs_discovered": 2, 00:22:10.573 "num_base_bdevs_operational": 2, 00:22:10.573 "base_bdevs_list": [ 00:22:10.573 { 00:22:10.573 "name": null, 00:22:10.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.573 "is_configured": false, 00:22:10.573 "data_offset": 0, 00:22:10.573 "data_size": 63488 00:22:10.573 }, 00:22:10.573 { 00:22:10.573 "name": null, 00:22:10.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.573 "is_configured": false, 00:22:10.573 "data_offset": 2048, 00:22:10.573 "data_size": 63488 00:22:10.573 }, 00:22:10.573 { 00:22:10.573 "name": "BaseBdev3", 00:22:10.573 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:10.573 "is_configured": true, 00:22:10.573 "data_offset": 2048, 00:22:10.573 "data_size": 63488 00:22:10.573 }, 00:22:10.573 { 00:22:10.573 "name": "BaseBdev4", 00:22:10.573 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:10.573 "is_configured": true, 00:22:10.573 "data_offset": 2048, 00:22:10.573 "data_size": 63488 00:22:10.573 } 00:22:10.573 ] 00:22:10.573 }' 00:22:10.573 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.573 14:54:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.140 "name": "raid_bdev1", 00:22:11.140 "uuid": "d2a4cb02-0844-4f05-9446-9556ba9bc5ae", 00:22:11.140 "strip_size_kb": 0, 00:22:11.140 "state": "online", 00:22:11.140 "raid_level": "raid1", 00:22:11.140 "superblock": true, 00:22:11.140 "num_base_bdevs": 4, 00:22:11.140 "num_base_bdevs_discovered": 2, 00:22:11.140 "num_base_bdevs_operational": 2, 00:22:11.140 "base_bdevs_list": [ 00:22:11.140 { 00:22:11.140 "name": null, 00:22:11.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.140 "is_configured": false, 00:22:11.140 "data_offset": 0, 00:22:11.140 "data_size": 63488 00:22:11.140 }, 00:22:11.140 { 00:22:11.140 "name": null, 00:22:11.140 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:11.140 "is_configured": false, 00:22:11.140 "data_offset": 2048, 00:22:11.140 "data_size": 63488 00:22:11.140 }, 00:22:11.140 { 00:22:11.140 "name": "BaseBdev3", 00:22:11.140 "uuid": "4aeaa801-38ca-5fef-9260-a043cc672240", 00:22:11.140 "is_configured": true, 00:22:11.140 "data_offset": 2048, 00:22:11.140 "data_size": 63488 00:22:11.140 }, 00:22:11.140 { 00:22:11.140 "name": "BaseBdev4", 00:22:11.140 "uuid": "12e86b11-c518-5c9c-b05e-d4787524fe61", 00:22:11.140 "is_configured": true, 00:22:11.140 "data_offset": 2048, 00:22:11.140 "data_size": 63488 00:22:11.140 } 00:22:11.140 ] 00:22:11.140 }' 00:22:11.140 14:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.140 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:11.140 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79630 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79630 ']' 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79630 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79630 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:22:11.404 killing process with pid 79630 00:22:11.404 Received shutdown signal, test time was about 19.713778 seconds 00:22:11.404 00:22:11.404 Latency(us) 00:22:11.404 [2024-11-04T14:54:41.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.404 [2024-11-04T14:54:41.296Z] =================================================================================================================== 00:22:11.404 [2024-11-04T14:54:41.296Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79630' 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79630 00:22:11.404 [2024-11-04 14:54:41.099121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:11.404 14:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79630 00:22:11.404 [2024-11-04 14:54:41.099334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.404 [2024-11-04 14:54:41.099471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.404 [2024-11-04 14:54:41.099490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:11.663 [2024-11-04 14:54:41.479767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:13.037 14:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:22:13.037 00:22:13.037 real 0m23.332s 00:22:13.037 user 0m31.851s 00:22:13.037 sys 0m2.462s 00:22:13.037 14:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:13.037 14:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:13.037 ************************************ 00:22:13.037 END TEST raid_rebuild_test_sb_io 00:22:13.037 
************************************ 00:22:13.037 14:54:42 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:22:13.037 14:54:42 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:22:13.037 14:54:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:13.037 14:54:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:13.038 14:54:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:13.038 ************************************ 00:22:13.038 START TEST raid5f_state_function_test 00:22:13.038 ************************************ 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:13.038 14:54:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:13.038 Process raid pid: 80370 00:22:13.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80370 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80370' 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80370 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80370 ']' 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:13.038 14:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.038 [2024-11-04 14:54:42.827647] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:22:13.038 [2024-11-04 14:54:42.828213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.296 [2024-11-04 14:54:43.018005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.553 [2024-11-04 14:54:43.191389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.811 [2024-11-04 14:54:43.458119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.811 [2024-11-04 14:54:43.458176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.069 [2024-11-04 14:54:43.821050] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:14.069 [2024-11-04 14:54:43.821149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:14.069 [2024-11-04 14:54:43.821168] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:14.069 [2024-11-04 14:54:43.821185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:14.069 [2024-11-04 14:54:43.821195] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:22:14.069 [2024-11-04 14:54:43.821210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.069 "name": "Existed_Raid", 00:22:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.069 "strip_size_kb": 64, 00:22:14.069 "state": "configuring", 00:22:14.069 "raid_level": "raid5f", 00:22:14.069 "superblock": false, 00:22:14.069 "num_base_bdevs": 3, 00:22:14.069 "num_base_bdevs_discovered": 0, 00:22:14.069 "num_base_bdevs_operational": 3, 00:22:14.069 "base_bdevs_list": [ 00:22:14.069 { 00:22:14.069 "name": "BaseBdev1", 00:22:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.069 "is_configured": false, 00:22:14.069 "data_offset": 0, 00:22:14.069 "data_size": 0 00:22:14.069 }, 00:22:14.069 { 00:22:14.069 "name": "BaseBdev2", 00:22:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.069 "is_configured": false, 00:22:14.069 "data_offset": 0, 00:22:14.069 "data_size": 0 00:22:14.069 }, 00:22:14.069 { 00:22:14.069 "name": "BaseBdev3", 00:22:14.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.069 "is_configured": false, 00:22:14.069 "data_offset": 0, 00:22:14.069 "data_size": 0 00:22:14.069 } 00:22:14.069 ] 00:22:14.069 }' 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.069 14:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.637 [2024-11-04 14:54:44.353120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:14.637 [2024-11-04 14:54:44.353169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.637 [2024-11-04 14:54:44.361102] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:14.637 [2024-11-04 14:54:44.361160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:14.637 [2024-11-04 14:54:44.361186] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:14.637 [2024-11-04 14:54:44.361202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:14.637 [2024-11-04 14:54:44.361212] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:14.637 [2024-11-04 14:54:44.361239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.637 [2024-11-04 14:54:44.406484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:14.637 BaseBdev1 00:22:14.637 14:54:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.637 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.637 [ 00:22:14.637 { 00:22:14.637 "name": "BaseBdev1", 00:22:14.637 "aliases": [ 00:22:14.637 "4729a792-4808-4b1d-858a-a85739d04281" 00:22:14.637 ], 00:22:14.637 "product_name": "Malloc disk", 00:22:14.637 "block_size": 512, 00:22:14.637 "num_blocks": 65536, 00:22:14.637 "uuid": "4729a792-4808-4b1d-858a-a85739d04281", 00:22:14.637 "assigned_rate_limits": { 00:22:14.637 "rw_ios_per_sec": 0, 00:22:14.637 
"rw_mbytes_per_sec": 0, 00:22:14.637 "r_mbytes_per_sec": 0, 00:22:14.637 "w_mbytes_per_sec": 0 00:22:14.637 }, 00:22:14.637 "claimed": true, 00:22:14.637 "claim_type": "exclusive_write", 00:22:14.637 "zoned": false, 00:22:14.638 "supported_io_types": { 00:22:14.638 "read": true, 00:22:14.638 "write": true, 00:22:14.638 "unmap": true, 00:22:14.638 "flush": true, 00:22:14.638 "reset": true, 00:22:14.638 "nvme_admin": false, 00:22:14.638 "nvme_io": false, 00:22:14.638 "nvme_io_md": false, 00:22:14.638 "write_zeroes": true, 00:22:14.638 "zcopy": true, 00:22:14.638 "get_zone_info": false, 00:22:14.638 "zone_management": false, 00:22:14.638 "zone_append": false, 00:22:14.638 "compare": false, 00:22:14.638 "compare_and_write": false, 00:22:14.638 "abort": true, 00:22:14.638 "seek_hole": false, 00:22:14.638 "seek_data": false, 00:22:14.638 "copy": true, 00:22:14.638 "nvme_iov_md": false 00:22:14.638 }, 00:22:14.638 "memory_domains": [ 00:22:14.638 { 00:22:14.638 "dma_device_id": "system", 00:22:14.638 "dma_device_type": 1 00:22:14.638 }, 00:22:14.638 { 00:22:14.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.638 "dma_device_type": 2 00:22:14.638 } 00:22:14.638 ], 00:22:14.638 "driver_specific": {} 00:22:14.638 } 00:22:14.638 ] 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:14.638 14:54:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.638 "name": "Existed_Raid", 00:22:14.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.638 "strip_size_kb": 64, 00:22:14.638 "state": "configuring", 00:22:14.638 "raid_level": "raid5f", 00:22:14.638 "superblock": false, 00:22:14.638 "num_base_bdevs": 3, 00:22:14.638 "num_base_bdevs_discovered": 1, 00:22:14.638 "num_base_bdevs_operational": 3, 00:22:14.638 "base_bdevs_list": [ 00:22:14.638 { 00:22:14.638 "name": "BaseBdev1", 00:22:14.638 "uuid": "4729a792-4808-4b1d-858a-a85739d04281", 00:22:14.638 "is_configured": true, 00:22:14.638 "data_offset": 0, 00:22:14.638 "data_size": 65536 00:22:14.638 }, 00:22:14.638 { 00:22:14.638 "name": 
"BaseBdev2", 00:22:14.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.638 "is_configured": false, 00:22:14.638 "data_offset": 0, 00:22:14.638 "data_size": 0 00:22:14.638 }, 00:22:14.638 { 00:22:14.638 "name": "BaseBdev3", 00:22:14.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.638 "is_configured": false, 00:22:14.638 "data_offset": 0, 00:22:14.638 "data_size": 0 00:22:14.638 } 00:22:14.638 ] 00:22:14.638 }' 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.638 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.205 [2024-11-04 14:54:44.950726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:15.205 [2024-11-04 14:54:44.950810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.205 [2024-11-04 14:54:44.958738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:15.205 [2024-11-04 14:54:44.961170] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:22:15.205 [2024-11-04 14:54:44.961245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:15.205 [2024-11-04 14:54:44.961264] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:15.205 [2024-11-04 14:54:44.961291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:15.205 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.206 14:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.206 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.206 "name": "Existed_Raid", 00:22:15.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.206 "strip_size_kb": 64, 00:22:15.206 "state": "configuring", 00:22:15.206 "raid_level": "raid5f", 00:22:15.206 "superblock": false, 00:22:15.206 "num_base_bdevs": 3, 00:22:15.206 "num_base_bdevs_discovered": 1, 00:22:15.206 "num_base_bdevs_operational": 3, 00:22:15.206 "base_bdevs_list": [ 00:22:15.206 { 00:22:15.206 "name": "BaseBdev1", 00:22:15.206 "uuid": "4729a792-4808-4b1d-858a-a85739d04281", 00:22:15.206 "is_configured": true, 00:22:15.206 "data_offset": 0, 00:22:15.206 "data_size": 65536 00:22:15.206 }, 00:22:15.206 { 00:22:15.206 "name": "BaseBdev2", 00:22:15.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.206 "is_configured": false, 00:22:15.206 "data_offset": 0, 00:22:15.206 "data_size": 0 00:22:15.206 }, 00:22:15.206 { 00:22:15.206 "name": "BaseBdev3", 00:22:15.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.206 "is_configured": false, 00:22:15.206 "data_offset": 0, 00:22:15.206 "data_size": 0 00:22:15.206 } 00:22:15.206 ] 00:22:15.206 }' 00:22:15.206 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.206 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.771 [2024-11-04 14:54:45.525495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:15.771 BaseBdev2 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.771 14:54:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.771 [ 00:22:15.771 { 00:22:15.771 "name": "BaseBdev2", 00:22:15.771 "aliases": [ 00:22:15.771 "bdb0ebe8-e941-443d-a124-0bba9dee7ca2" 00:22:15.771 ], 00:22:15.771 "product_name": "Malloc disk", 00:22:15.771 "block_size": 512, 00:22:15.771 "num_blocks": 65536, 00:22:15.771 "uuid": "bdb0ebe8-e941-443d-a124-0bba9dee7ca2", 00:22:15.771 "assigned_rate_limits": { 00:22:15.771 "rw_ios_per_sec": 0, 00:22:15.771 "rw_mbytes_per_sec": 0, 00:22:15.771 "r_mbytes_per_sec": 0, 00:22:15.771 "w_mbytes_per_sec": 0 00:22:15.771 }, 00:22:15.771 "claimed": true, 00:22:15.771 "claim_type": "exclusive_write", 00:22:15.771 "zoned": false, 00:22:15.771 "supported_io_types": { 00:22:15.771 "read": true, 00:22:15.771 "write": true, 00:22:15.771 "unmap": true, 00:22:15.771 "flush": true, 00:22:15.771 "reset": true, 00:22:15.771 "nvme_admin": false, 00:22:15.771 "nvme_io": false, 00:22:15.771 "nvme_io_md": false, 00:22:15.771 "write_zeroes": true, 00:22:15.771 "zcopy": true, 00:22:15.771 "get_zone_info": false, 00:22:15.771 "zone_management": false, 00:22:15.771 "zone_append": false, 00:22:15.771 "compare": false, 00:22:15.771 "compare_and_write": false, 00:22:15.771 "abort": true, 00:22:15.771 "seek_hole": false, 00:22:15.771 "seek_data": false, 00:22:15.771 "copy": true, 00:22:15.771 "nvme_iov_md": false 00:22:15.771 }, 00:22:15.771 "memory_domains": [ 00:22:15.771 { 00:22:15.772 "dma_device_id": "system", 00:22:15.772 "dma_device_type": 1 00:22:15.772 }, 00:22:15.772 { 00:22:15.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.772 "dma_device_type": 2 00:22:15.772 } 00:22:15.772 ], 00:22:15.772 "driver_specific": {} 00:22:15.772 } 00:22:15.772 ] 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:22:15.772 "name": "Existed_Raid", 00:22:15.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.772 "strip_size_kb": 64, 00:22:15.772 "state": "configuring", 00:22:15.772 "raid_level": "raid5f", 00:22:15.772 "superblock": false, 00:22:15.772 "num_base_bdevs": 3, 00:22:15.772 "num_base_bdevs_discovered": 2, 00:22:15.772 "num_base_bdevs_operational": 3, 00:22:15.772 "base_bdevs_list": [ 00:22:15.772 { 00:22:15.772 "name": "BaseBdev1", 00:22:15.772 "uuid": "4729a792-4808-4b1d-858a-a85739d04281", 00:22:15.772 "is_configured": true, 00:22:15.772 "data_offset": 0, 00:22:15.772 "data_size": 65536 00:22:15.772 }, 00:22:15.772 { 00:22:15.772 "name": "BaseBdev2", 00:22:15.772 "uuid": "bdb0ebe8-e941-443d-a124-0bba9dee7ca2", 00:22:15.772 "is_configured": true, 00:22:15.772 "data_offset": 0, 00:22:15.772 "data_size": 65536 00:22:15.772 }, 00:22:15.772 { 00:22:15.772 "name": "BaseBdev3", 00:22:15.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.772 "is_configured": false, 00:22:15.772 "data_offset": 0, 00:22:15.772 "data_size": 0 00:22:15.772 } 00:22:15.772 ] 00:22:15.772 }' 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.772 14:54:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.337 [2024-11-04 14:54:46.114733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:16.337 [2024-11-04 14:54:46.115067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:16.337 [2024-11-04 14:54:46.115102] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:16.337 [2024-11-04 14:54:46.115502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:16.337 [2024-11-04 14:54:46.121655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:16.337 [2024-11-04 14:54:46.121801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:16.337 [2024-11-04 14:54:46.122426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.337 BaseBdev3 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.337 [ 00:22:16.337 { 00:22:16.337 "name": "BaseBdev3", 00:22:16.337 "aliases": [ 00:22:16.337 "e0f286b4-16d1-4462-a842-ff8fea8b5c07" 00:22:16.337 ], 00:22:16.337 "product_name": "Malloc disk", 00:22:16.337 "block_size": 512, 00:22:16.337 "num_blocks": 65536, 00:22:16.337 "uuid": "e0f286b4-16d1-4462-a842-ff8fea8b5c07", 00:22:16.337 "assigned_rate_limits": { 00:22:16.337 "rw_ios_per_sec": 0, 00:22:16.337 "rw_mbytes_per_sec": 0, 00:22:16.337 "r_mbytes_per_sec": 0, 00:22:16.337 "w_mbytes_per_sec": 0 00:22:16.337 }, 00:22:16.337 "claimed": true, 00:22:16.337 "claim_type": "exclusive_write", 00:22:16.337 "zoned": false, 00:22:16.337 "supported_io_types": { 00:22:16.337 "read": true, 00:22:16.337 "write": true, 00:22:16.337 "unmap": true, 00:22:16.337 "flush": true, 00:22:16.337 "reset": true, 00:22:16.337 "nvme_admin": false, 00:22:16.337 "nvme_io": false, 00:22:16.337 "nvme_io_md": false, 00:22:16.337 "write_zeroes": true, 00:22:16.337 "zcopy": true, 00:22:16.337 "get_zone_info": false, 00:22:16.337 "zone_management": false, 00:22:16.337 "zone_append": false, 00:22:16.337 "compare": false, 00:22:16.337 "compare_and_write": false, 00:22:16.337 "abort": true, 00:22:16.337 "seek_hole": false, 00:22:16.337 "seek_data": false, 00:22:16.337 "copy": true, 00:22:16.337 "nvme_iov_md": false 00:22:16.337 }, 00:22:16.337 "memory_domains": [ 00:22:16.337 { 00:22:16.337 "dma_device_id": "system", 00:22:16.337 "dma_device_type": 1 00:22:16.337 }, 00:22:16.337 { 00:22:16.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.337 "dma_device_type": 2 00:22:16.337 } 00:22:16.337 ], 00:22:16.337 "driver_specific": {} 00:22:16.337 } 00:22:16.337 ] 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.337 14:54:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.337 "name": "Existed_Raid", 00:22:16.337 "uuid": "7cdb71f4-07be-48ac-8d8b-4f5e0b25248e", 00:22:16.337 "strip_size_kb": 64, 00:22:16.337 "state": "online", 00:22:16.337 "raid_level": "raid5f", 00:22:16.337 "superblock": false, 00:22:16.337 "num_base_bdevs": 3, 00:22:16.337 "num_base_bdevs_discovered": 3, 00:22:16.337 "num_base_bdevs_operational": 3, 00:22:16.337 "base_bdevs_list": [ 00:22:16.337 { 00:22:16.337 "name": "BaseBdev1", 00:22:16.337 "uuid": "4729a792-4808-4b1d-858a-a85739d04281", 00:22:16.337 "is_configured": true, 00:22:16.337 "data_offset": 0, 00:22:16.337 "data_size": 65536 00:22:16.337 }, 00:22:16.337 { 00:22:16.337 "name": "BaseBdev2", 00:22:16.337 "uuid": "bdb0ebe8-e941-443d-a124-0bba9dee7ca2", 00:22:16.337 "is_configured": true, 00:22:16.337 "data_offset": 0, 00:22:16.337 "data_size": 65536 00:22:16.337 }, 00:22:16.337 { 00:22:16.337 "name": "BaseBdev3", 00:22:16.337 "uuid": "e0f286b4-16d1-4462-a842-ff8fea8b5c07", 00:22:16.337 "is_configured": true, 00:22:16.337 "data_offset": 0, 00:22:16.337 "data_size": 65536 00:22:16.337 } 00:22:16.337 ] 00:22:16.337 }' 00:22:16.337 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.595 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:16.853 14:54:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.853 [2024-11-04 14:54:46.697207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.853 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:16.853 "name": "Existed_Raid", 00:22:16.853 "aliases": [ 00:22:16.853 "7cdb71f4-07be-48ac-8d8b-4f5e0b25248e" 00:22:16.853 ], 00:22:16.853 "product_name": "Raid Volume", 00:22:16.853 "block_size": 512, 00:22:16.853 "num_blocks": 131072, 00:22:16.853 "uuid": "7cdb71f4-07be-48ac-8d8b-4f5e0b25248e", 00:22:16.853 "assigned_rate_limits": { 00:22:16.853 "rw_ios_per_sec": 0, 00:22:16.853 "rw_mbytes_per_sec": 0, 00:22:16.853 "r_mbytes_per_sec": 0, 00:22:16.853 "w_mbytes_per_sec": 0 00:22:16.853 }, 00:22:16.853 "claimed": false, 00:22:16.853 "zoned": false, 00:22:16.853 "supported_io_types": { 00:22:16.853 "read": true, 00:22:16.853 "write": true, 00:22:16.853 "unmap": false, 00:22:16.853 "flush": false, 00:22:16.853 "reset": true, 00:22:16.853 "nvme_admin": false, 00:22:16.853 "nvme_io": false, 00:22:16.853 "nvme_io_md": false, 00:22:16.853 "write_zeroes": true, 00:22:16.853 "zcopy": false, 00:22:16.853 "get_zone_info": false, 00:22:16.853 "zone_management": false, 00:22:16.853 "zone_append": false, 
00:22:16.853 "compare": false, 00:22:16.853 "compare_and_write": false, 00:22:16.853 "abort": false, 00:22:16.853 "seek_hole": false, 00:22:16.853 "seek_data": false, 00:22:16.853 "copy": false, 00:22:16.853 "nvme_iov_md": false 00:22:16.853 }, 00:22:16.853 "driver_specific": { 00:22:16.853 "raid": { 00:22:16.853 "uuid": "7cdb71f4-07be-48ac-8d8b-4f5e0b25248e", 00:22:16.853 "strip_size_kb": 64, 00:22:16.853 "state": "online", 00:22:16.853 "raid_level": "raid5f", 00:22:16.853 "superblock": false, 00:22:16.853 "num_base_bdevs": 3, 00:22:16.853 "num_base_bdevs_discovered": 3, 00:22:16.853 "num_base_bdevs_operational": 3, 00:22:16.853 "base_bdevs_list": [ 00:22:16.853 { 00:22:16.853 "name": "BaseBdev1", 00:22:16.853 "uuid": "4729a792-4808-4b1d-858a-a85739d04281", 00:22:16.853 "is_configured": true, 00:22:16.853 "data_offset": 0, 00:22:16.853 "data_size": 65536 00:22:16.853 }, 00:22:16.853 { 00:22:16.853 "name": "BaseBdev2", 00:22:16.853 "uuid": "bdb0ebe8-e941-443d-a124-0bba9dee7ca2", 00:22:16.853 "is_configured": true, 00:22:16.854 "data_offset": 0, 00:22:16.854 "data_size": 65536 00:22:16.854 }, 00:22:16.854 { 00:22:16.854 "name": "BaseBdev3", 00:22:16.854 "uuid": "e0f286b4-16d1-4462-a842-ff8fea8b5c07", 00:22:16.854 "is_configured": true, 00:22:16.854 "data_offset": 0, 00:22:16.854 "data_size": 65536 00:22:16.854 } 00:22:16.854 ] 00:22:16.854 } 00:22:16.854 } 00:22:16.854 }' 00:22:16.854 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:17.111 BaseBdev2 00:22:17.111 BaseBdev3' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.111 14:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.370 [2024-11-04 14:54:47.021116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:17.370 
14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.370 "name": "Existed_Raid", 00:22:17.370 "uuid": "7cdb71f4-07be-48ac-8d8b-4f5e0b25248e", 00:22:17.370 "strip_size_kb": 64, 00:22:17.370 "state": 
"online", 00:22:17.370 "raid_level": "raid5f", 00:22:17.370 "superblock": false, 00:22:17.370 "num_base_bdevs": 3, 00:22:17.370 "num_base_bdevs_discovered": 2, 00:22:17.370 "num_base_bdevs_operational": 2, 00:22:17.370 "base_bdevs_list": [ 00:22:17.370 { 00:22:17.370 "name": null, 00:22:17.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.370 "is_configured": false, 00:22:17.370 "data_offset": 0, 00:22:17.370 "data_size": 65536 00:22:17.370 }, 00:22:17.370 { 00:22:17.370 "name": "BaseBdev2", 00:22:17.370 "uuid": "bdb0ebe8-e941-443d-a124-0bba9dee7ca2", 00:22:17.370 "is_configured": true, 00:22:17.370 "data_offset": 0, 00:22:17.370 "data_size": 65536 00:22:17.370 }, 00:22:17.370 { 00:22:17.370 "name": "BaseBdev3", 00:22:17.370 "uuid": "e0f286b4-16d1-4462-a842-ff8fea8b5c07", 00:22:17.370 "is_configured": true, 00:22:17.370 "data_offset": 0, 00:22:17.370 "data_size": 65536 00:22:17.370 } 00:22:17.370 ] 00:22:17.370 }' 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.370 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.968 [2024-11-04 14:54:47.683935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:17.968 [2024-11-04 14:54:47.684080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.968 [2024-11-04 14:54:47.769020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.968 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.968 [2024-11-04 14:54:47.833089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:17.968 [2024-11-04 14:54:47.833161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.227 14:54:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.227 BaseBdev2 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:22:18.227 [ 00:22:18.227 { 00:22:18.227 "name": "BaseBdev2", 00:22:18.227 "aliases": [ 00:22:18.227 "f877e75c-c929-4ce3-8a4d-2b3609042dc0" 00:22:18.227 ], 00:22:18.227 "product_name": "Malloc disk", 00:22:18.227 "block_size": 512, 00:22:18.227 "num_blocks": 65536, 00:22:18.227 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:18.227 "assigned_rate_limits": { 00:22:18.227 "rw_ios_per_sec": 0, 00:22:18.227 "rw_mbytes_per_sec": 0, 00:22:18.227 "r_mbytes_per_sec": 0, 00:22:18.227 "w_mbytes_per_sec": 0 00:22:18.227 }, 00:22:18.227 "claimed": false, 00:22:18.227 "zoned": false, 00:22:18.227 "supported_io_types": { 00:22:18.227 "read": true, 00:22:18.227 "write": true, 00:22:18.227 "unmap": true, 00:22:18.227 "flush": true, 00:22:18.227 "reset": true, 00:22:18.227 "nvme_admin": false, 00:22:18.227 "nvme_io": false, 00:22:18.227 "nvme_io_md": false, 00:22:18.227 "write_zeroes": true, 00:22:18.227 "zcopy": true, 00:22:18.227 "get_zone_info": false, 00:22:18.227 "zone_management": false, 00:22:18.227 "zone_append": false, 00:22:18.227 "compare": false, 00:22:18.227 "compare_and_write": false, 00:22:18.227 "abort": true, 00:22:18.227 "seek_hole": false, 00:22:18.227 "seek_data": false, 00:22:18.227 "copy": true, 00:22:18.227 "nvme_iov_md": false 00:22:18.227 }, 00:22:18.227 "memory_domains": [ 00:22:18.227 { 00:22:18.227 "dma_device_id": "system", 00:22:18.227 "dma_device_type": 1 00:22:18.227 }, 00:22:18.227 { 00:22:18.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.227 "dma_device_type": 2 00:22:18.227 } 00:22:18.227 ], 00:22:18.227 "driver_specific": {} 00:22:18.227 } 00:22:18.227 ] 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:18.227 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.228 BaseBdev3 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.228 14:54:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:18.228 [ 00:22:18.228 { 00:22:18.228 "name": "BaseBdev3", 00:22:18.228 "aliases": [ 00:22:18.228 "687cbdd7-f76d-4e47-b052-9fdd1cacf209" 00:22:18.228 ], 00:22:18.228 "product_name": "Malloc disk", 00:22:18.228 "block_size": 512, 00:22:18.228 "num_blocks": 65536, 00:22:18.228 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:18.228 "assigned_rate_limits": { 00:22:18.228 "rw_ios_per_sec": 0, 00:22:18.228 "rw_mbytes_per_sec": 0, 00:22:18.228 "r_mbytes_per_sec": 0, 00:22:18.228 "w_mbytes_per_sec": 0 00:22:18.228 }, 00:22:18.228 "claimed": false, 00:22:18.228 "zoned": false, 00:22:18.228 "supported_io_types": { 00:22:18.228 "read": true, 00:22:18.228 "write": true, 00:22:18.486 "unmap": true, 00:22:18.486 "flush": true, 00:22:18.486 "reset": true, 00:22:18.486 "nvme_admin": false, 00:22:18.486 "nvme_io": false, 00:22:18.486 "nvme_io_md": false, 00:22:18.486 "write_zeroes": true, 00:22:18.486 "zcopy": true, 00:22:18.486 "get_zone_info": false, 00:22:18.486 "zone_management": false, 00:22:18.486 "zone_append": false, 00:22:18.486 "compare": false, 00:22:18.486 "compare_and_write": false, 00:22:18.486 "abort": true, 00:22:18.486 "seek_hole": false, 00:22:18.487 "seek_data": false, 00:22:18.487 "copy": true, 00:22:18.487 "nvme_iov_md": false 00:22:18.487 }, 00:22:18.487 "memory_domains": [ 00:22:18.487 { 00:22:18.487 "dma_device_id": "system", 00:22:18.487 "dma_device_type": 1 00:22:18.487 }, 00:22:18.487 { 00:22:18.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.487 "dma_device_type": 2 00:22:18.487 } 00:22:18.487 ], 00:22:18.487 "driver_specific": {} 00:22:18.487 } 00:22:18.487 ] 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:18.487 14:54:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.487 [2024-11-04 14:54:48.128011] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:18.487 [2024-11-04 14:54:48.128081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:18.487 [2024-11-04 14:54:48.128120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:18.487 [2024-11-04 14:54:48.130658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.487 14:54:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.487 "name": "Existed_Raid", 00:22:18.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.487 "strip_size_kb": 64, 00:22:18.487 "state": "configuring", 00:22:18.487 "raid_level": "raid5f", 00:22:18.487 "superblock": false, 00:22:18.487 "num_base_bdevs": 3, 00:22:18.487 "num_base_bdevs_discovered": 2, 00:22:18.487 "num_base_bdevs_operational": 3, 00:22:18.487 "base_bdevs_list": [ 00:22:18.487 { 00:22:18.487 "name": "BaseBdev1", 00:22:18.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.487 "is_configured": false, 00:22:18.487 "data_offset": 0, 00:22:18.487 "data_size": 0 00:22:18.487 }, 00:22:18.487 { 00:22:18.487 "name": "BaseBdev2", 00:22:18.487 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:18.487 "is_configured": true, 00:22:18.487 "data_offset": 0, 00:22:18.487 "data_size": 65536 00:22:18.487 }, 00:22:18.487 { 00:22:18.487 "name": "BaseBdev3", 00:22:18.487 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:18.487 "is_configured": true, 
00:22:18.487 "data_offset": 0, 00:22:18.487 "data_size": 65536 00:22:18.487 } 00:22:18.487 ] 00:22:18.487 }' 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.487 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.745 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:18.745 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.745 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 [2024-11-04 14:54:48.640208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.003 14:54:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.003 "name": "Existed_Raid", 00:22:19.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.003 "strip_size_kb": 64, 00:22:19.003 "state": "configuring", 00:22:19.003 "raid_level": "raid5f", 00:22:19.003 "superblock": false, 00:22:19.003 "num_base_bdevs": 3, 00:22:19.003 "num_base_bdevs_discovered": 1, 00:22:19.003 "num_base_bdevs_operational": 3, 00:22:19.003 "base_bdevs_list": [ 00:22:19.003 { 00:22:19.003 "name": "BaseBdev1", 00:22:19.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.003 "is_configured": false, 00:22:19.003 "data_offset": 0, 00:22:19.003 "data_size": 0 00:22:19.003 }, 00:22:19.003 { 00:22:19.003 "name": null, 00:22:19.003 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:19.003 "is_configured": false, 00:22:19.003 "data_offset": 0, 00:22:19.003 "data_size": 65536 00:22:19.003 }, 00:22:19.003 { 00:22:19.003 "name": "BaseBdev3", 00:22:19.003 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:19.003 "is_configured": true, 00:22:19.003 "data_offset": 0, 00:22:19.003 "data_size": 65536 00:22:19.003 } 00:22:19.003 ] 00:22:19.003 }' 00:22:19.003 14:54:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.003 14:54:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.570 [2024-11-04 14:54:49.259339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:19.570 BaseBdev1 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:19.570 14:54:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.570 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.571 [ 00:22:19.571 { 00:22:19.571 "name": "BaseBdev1", 00:22:19.571 "aliases": [ 00:22:19.571 "bdee5bf8-7e65-4942-af84-a41390050968" 00:22:19.571 ], 00:22:19.571 "product_name": "Malloc disk", 00:22:19.571 "block_size": 512, 00:22:19.571 "num_blocks": 65536, 00:22:19.571 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:19.571 "assigned_rate_limits": { 00:22:19.571 "rw_ios_per_sec": 0, 00:22:19.571 "rw_mbytes_per_sec": 0, 00:22:19.571 "r_mbytes_per_sec": 0, 00:22:19.571 "w_mbytes_per_sec": 0 00:22:19.571 }, 00:22:19.571 "claimed": true, 00:22:19.571 "claim_type": "exclusive_write", 00:22:19.571 "zoned": false, 00:22:19.571 "supported_io_types": { 00:22:19.571 "read": true, 00:22:19.571 "write": true, 00:22:19.571 "unmap": true, 00:22:19.571 "flush": true, 00:22:19.571 "reset": true, 00:22:19.571 "nvme_admin": false, 00:22:19.571 "nvme_io": false, 00:22:19.571 "nvme_io_md": false, 00:22:19.571 "write_zeroes": true, 00:22:19.571 "zcopy": true, 00:22:19.571 "get_zone_info": false, 00:22:19.571 "zone_management": false, 00:22:19.571 "zone_append": false, 00:22:19.571 
"compare": false, 00:22:19.571 "compare_and_write": false, 00:22:19.571 "abort": true, 00:22:19.571 "seek_hole": false, 00:22:19.571 "seek_data": false, 00:22:19.571 "copy": true, 00:22:19.571 "nvme_iov_md": false 00:22:19.571 }, 00:22:19.571 "memory_domains": [ 00:22:19.571 { 00:22:19.571 "dma_device_id": "system", 00:22:19.571 "dma_device_type": 1 00:22:19.571 }, 00:22:19.571 { 00:22:19.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.571 "dma_device_type": 2 00:22:19.571 } 00:22:19.571 ], 00:22:19.571 "driver_specific": {} 00:22:19.571 } 00:22:19.571 ] 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.571 14:54:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.571 "name": "Existed_Raid", 00:22:19.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.571 "strip_size_kb": 64, 00:22:19.571 "state": "configuring", 00:22:19.571 "raid_level": "raid5f", 00:22:19.571 "superblock": false, 00:22:19.571 "num_base_bdevs": 3, 00:22:19.571 "num_base_bdevs_discovered": 2, 00:22:19.571 "num_base_bdevs_operational": 3, 00:22:19.571 "base_bdevs_list": [ 00:22:19.571 { 00:22:19.571 "name": "BaseBdev1", 00:22:19.571 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:19.571 "is_configured": true, 00:22:19.571 "data_offset": 0, 00:22:19.571 "data_size": 65536 00:22:19.571 }, 00:22:19.571 { 00:22:19.571 "name": null, 00:22:19.571 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:19.571 "is_configured": false, 00:22:19.571 "data_offset": 0, 00:22:19.571 "data_size": 65536 00:22:19.571 }, 00:22:19.571 { 00:22:19.571 "name": "BaseBdev3", 00:22:19.571 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:19.571 "is_configured": true, 00:22:19.571 "data_offset": 0, 00:22:19.571 "data_size": 65536 00:22:19.571 } 00:22:19.571 ] 00:22:19.571 }' 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.571 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.138 14:54:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.138 [2024-11-04 14:54:49.867568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.138 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:20.138 14:54:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.139 "name": "Existed_Raid", 00:22:20.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.139 "strip_size_kb": 64, 00:22:20.139 "state": "configuring", 00:22:20.139 "raid_level": "raid5f", 00:22:20.139 "superblock": false, 00:22:20.139 "num_base_bdevs": 3, 00:22:20.139 "num_base_bdevs_discovered": 1, 00:22:20.139 "num_base_bdevs_operational": 3, 00:22:20.139 "base_bdevs_list": [ 00:22:20.139 { 00:22:20.139 "name": "BaseBdev1", 00:22:20.139 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:20.139 "is_configured": true, 00:22:20.139 "data_offset": 0, 00:22:20.139 "data_size": 65536 00:22:20.139 }, 00:22:20.139 { 00:22:20.139 "name": null, 00:22:20.139 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:20.139 "is_configured": false, 00:22:20.139 "data_offset": 0, 00:22:20.139 "data_size": 65536 00:22:20.139 }, 00:22:20.139 { 00:22:20.139 "name": null, 
00:22:20.139 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:20.139 "is_configured": false, 00:22:20.139 "data_offset": 0, 00:22:20.139 "data_size": 65536 00:22:20.139 } 00:22:20.139 ] 00:22:20.139 }' 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.139 14:54:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.705 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.706 [2024-11-04 14:54:50.475898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:20.706 14:54:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.706 "name": "Existed_Raid", 00:22:20.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.706 "strip_size_kb": 64, 00:22:20.706 "state": "configuring", 00:22:20.706 "raid_level": "raid5f", 00:22:20.706 "superblock": false, 00:22:20.706 "num_base_bdevs": 3, 00:22:20.706 "num_base_bdevs_discovered": 2, 00:22:20.706 "num_base_bdevs_operational": 3, 00:22:20.706 "base_bdevs_list": [ 00:22:20.706 { 
00:22:20.706 "name": "BaseBdev1", 00:22:20.706 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:20.706 "is_configured": true, 00:22:20.706 "data_offset": 0, 00:22:20.706 "data_size": 65536 00:22:20.706 }, 00:22:20.706 { 00:22:20.706 "name": null, 00:22:20.706 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:20.706 "is_configured": false, 00:22:20.706 "data_offset": 0, 00:22:20.706 "data_size": 65536 00:22:20.706 }, 00:22:20.706 { 00:22:20.706 "name": "BaseBdev3", 00:22:20.706 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:20.706 "is_configured": true, 00:22:20.706 "data_offset": 0, 00:22:20.706 "data_size": 65536 00:22:20.706 } 00:22:20.706 ] 00:22:20.706 }' 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.706 14:54:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.273 [2024-11-04 14:54:51.068079] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:21.273 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.274 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.532 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.532 14:54:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.532 "name": "Existed_Raid", 00:22:21.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.532 "strip_size_kb": 64, 00:22:21.532 "state": "configuring", 00:22:21.532 "raid_level": "raid5f", 00:22:21.532 "superblock": false, 00:22:21.532 "num_base_bdevs": 3, 00:22:21.532 "num_base_bdevs_discovered": 1, 00:22:21.532 "num_base_bdevs_operational": 3, 00:22:21.532 "base_bdevs_list": [ 00:22:21.532 { 00:22:21.532 "name": null, 00:22:21.532 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:21.532 "is_configured": false, 00:22:21.532 "data_offset": 0, 00:22:21.532 "data_size": 65536 00:22:21.532 }, 00:22:21.532 { 00:22:21.532 "name": null, 00:22:21.532 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:21.532 "is_configured": false, 00:22:21.532 "data_offset": 0, 00:22:21.532 "data_size": 65536 00:22:21.532 }, 00:22:21.532 { 00:22:21.532 "name": "BaseBdev3", 00:22:21.532 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:21.532 "is_configured": true, 00:22:21.532 "data_offset": 0, 00:22:21.532 "data_size": 65536 00:22:21.532 } 00:22:21.532 ] 00:22:21.532 }' 00:22:21.532 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.532 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.789 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.789 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.789 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:21.789 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.789 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.047 [2024-11-04 14:54:51.720863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.047 14:54:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.047 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.047 "name": "Existed_Raid", 00:22:22.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.047 "strip_size_kb": 64, 00:22:22.047 "state": "configuring", 00:22:22.047 "raid_level": "raid5f", 00:22:22.047 "superblock": false, 00:22:22.047 "num_base_bdevs": 3, 00:22:22.047 "num_base_bdevs_discovered": 2, 00:22:22.047 "num_base_bdevs_operational": 3, 00:22:22.047 "base_bdevs_list": [ 00:22:22.047 { 00:22:22.047 "name": null, 00:22:22.047 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:22.047 "is_configured": false, 00:22:22.047 "data_offset": 0, 00:22:22.047 "data_size": 65536 00:22:22.047 }, 00:22:22.047 { 00:22:22.047 "name": "BaseBdev2", 00:22:22.047 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:22.047 "is_configured": true, 00:22:22.047 "data_offset": 0, 00:22:22.047 "data_size": 65536 00:22:22.047 }, 00:22:22.047 { 00:22:22.047 "name": "BaseBdev3", 00:22:22.047 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:22.048 "is_configured": true, 00:22:22.048 "data_offset": 0, 00:22:22.048 "data_size": 65536 00:22:22.048 } 00:22:22.048 ] 00:22:22.048 }' 00:22:22.048 14:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.048 14:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.615 14:54:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bdee5bf8-7e65-4942-af84-a41390050968 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.615 [2024-11-04 14:54:52.375796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:22.615 [2024-11-04 14:54:52.376107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:22.615 [2024-11-04 14:54:52.376139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:22.615 [2024-11-04 14:54:52.376496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:22:22.615 [2024-11-04 14:54:52.381539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:22.615 [2024-11-04 14:54:52.381715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:22.615 [2024-11-04 14:54:52.382202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.615 NewBaseBdev 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.615 14:54:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.615 [ 00:22:22.615 { 00:22:22.615 "name": "NewBaseBdev", 00:22:22.615 "aliases": [ 00:22:22.615 "bdee5bf8-7e65-4942-af84-a41390050968" 00:22:22.615 ], 00:22:22.615 "product_name": "Malloc disk", 00:22:22.615 "block_size": 512, 00:22:22.615 "num_blocks": 65536, 00:22:22.615 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:22.615 "assigned_rate_limits": { 00:22:22.615 "rw_ios_per_sec": 0, 00:22:22.615 "rw_mbytes_per_sec": 0, 00:22:22.615 "r_mbytes_per_sec": 0, 00:22:22.615 "w_mbytes_per_sec": 0 00:22:22.615 }, 00:22:22.615 "claimed": true, 00:22:22.615 "claim_type": "exclusive_write", 00:22:22.615 "zoned": false, 00:22:22.615 "supported_io_types": { 00:22:22.615 "read": true, 00:22:22.615 "write": true, 00:22:22.615 "unmap": true, 00:22:22.615 "flush": true, 00:22:22.615 "reset": true, 00:22:22.615 "nvme_admin": false, 00:22:22.615 "nvme_io": false, 00:22:22.615 "nvme_io_md": false, 00:22:22.615 "write_zeroes": true, 00:22:22.615 "zcopy": true, 00:22:22.615 "get_zone_info": false, 00:22:22.615 "zone_management": false, 00:22:22.615 "zone_append": false, 00:22:22.615 "compare": false, 00:22:22.615 "compare_and_write": false, 00:22:22.615 "abort": true, 00:22:22.615 "seek_hole": false, 00:22:22.615 "seek_data": false, 00:22:22.615 "copy": true, 00:22:22.615 "nvme_iov_md": false 00:22:22.615 }, 00:22:22.615 "memory_domains": [ 00:22:22.615 { 00:22:22.615 "dma_device_id": "system", 00:22:22.615 "dma_device_type": 1 00:22:22.615 }, 00:22:22.615 { 00:22:22.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.615 "dma_device_type": 2 00:22:22.615 } 00:22:22.615 ], 00:22:22.615 "driver_specific": {} 00:22:22.615 } 00:22:22.615 ] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:22.615 14:54:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.615 "name": "Existed_Raid", 00:22:22.615 "uuid": "0c0a07c0-191f-4d4b-9989-34441a3b3712", 00:22:22.615 "strip_size_kb": 64, 00:22:22.615 "state": "online", 
00:22:22.615 "raid_level": "raid5f", 00:22:22.615 "superblock": false, 00:22:22.615 "num_base_bdevs": 3, 00:22:22.615 "num_base_bdevs_discovered": 3, 00:22:22.615 "num_base_bdevs_operational": 3, 00:22:22.615 "base_bdevs_list": [ 00:22:22.615 { 00:22:22.615 "name": "NewBaseBdev", 00:22:22.615 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:22.615 "is_configured": true, 00:22:22.615 "data_offset": 0, 00:22:22.615 "data_size": 65536 00:22:22.615 }, 00:22:22.615 { 00:22:22.615 "name": "BaseBdev2", 00:22:22.615 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:22.615 "is_configured": true, 00:22:22.615 "data_offset": 0, 00:22:22.615 "data_size": 65536 00:22:22.615 }, 00:22:22.615 { 00:22:22.615 "name": "BaseBdev3", 00:22:22.615 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:22.615 "is_configured": true, 00:22:22.615 "data_offset": 0, 00:22:22.615 "data_size": 65536 00:22:22.615 } 00:22:22.615 ] 00:22:22.615 }' 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.615 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:23.182 14:54:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:23.182 [2024-11-04 14:54:52.952376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.182 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:23.182 "name": "Existed_Raid", 00:22:23.182 "aliases": [ 00:22:23.182 "0c0a07c0-191f-4d4b-9989-34441a3b3712" 00:22:23.182 ], 00:22:23.182 "product_name": "Raid Volume", 00:22:23.182 "block_size": 512, 00:22:23.182 "num_blocks": 131072, 00:22:23.182 "uuid": "0c0a07c0-191f-4d4b-9989-34441a3b3712", 00:22:23.182 "assigned_rate_limits": { 00:22:23.182 "rw_ios_per_sec": 0, 00:22:23.182 "rw_mbytes_per_sec": 0, 00:22:23.182 "r_mbytes_per_sec": 0, 00:22:23.182 "w_mbytes_per_sec": 0 00:22:23.182 }, 00:22:23.182 "claimed": false, 00:22:23.182 "zoned": false, 00:22:23.182 "supported_io_types": { 00:22:23.182 "read": true, 00:22:23.182 "write": true, 00:22:23.182 "unmap": false, 00:22:23.182 "flush": false, 00:22:23.182 "reset": true, 00:22:23.182 "nvme_admin": false, 00:22:23.182 "nvme_io": false, 00:22:23.182 "nvme_io_md": false, 00:22:23.182 "write_zeroes": true, 00:22:23.182 "zcopy": false, 00:22:23.182 "get_zone_info": false, 00:22:23.182 "zone_management": false, 00:22:23.182 "zone_append": false, 00:22:23.182 "compare": false, 00:22:23.182 "compare_and_write": false, 00:22:23.182 "abort": false, 00:22:23.182 "seek_hole": false, 00:22:23.182 "seek_data": false, 00:22:23.182 "copy": false, 00:22:23.182 "nvme_iov_md": false 00:22:23.182 }, 00:22:23.182 "driver_specific": { 00:22:23.183 "raid": { 00:22:23.183 "uuid": 
"0c0a07c0-191f-4d4b-9989-34441a3b3712", 00:22:23.183 "strip_size_kb": 64, 00:22:23.183 "state": "online", 00:22:23.183 "raid_level": "raid5f", 00:22:23.183 "superblock": false, 00:22:23.183 "num_base_bdevs": 3, 00:22:23.183 "num_base_bdevs_discovered": 3, 00:22:23.183 "num_base_bdevs_operational": 3, 00:22:23.183 "base_bdevs_list": [ 00:22:23.183 { 00:22:23.183 "name": "NewBaseBdev", 00:22:23.183 "uuid": "bdee5bf8-7e65-4942-af84-a41390050968", 00:22:23.183 "is_configured": true, 00:22:23.183 "data_offset": 0, 00:22:23.183 "data_size": 65536 00:22:23.183 }, 00:22:23.183 { 00:22:23.183 "name": "BaseBdev2", 00:22:23.183 "uuid": "f877e75c-c929-4ce3-8a4d-2b3609042dc0", 00:22:23.183 "is_configured": true, 00:22:23.183 "data_offset": 0, 00:22:23.183 "data_size": 65536 00:22:23.183 }, 00:22:23.183 { 00:22:23.183 "name": "BaseBdev3", 00:22:23.183 "uuid": "687cbdd7-f76d-4e47-b052-9fdd1cacf209", 00:22:23.183 "is_configured": true, 00:22:23.183 "data_offset": 0, 00:22:23.183 "data_size": 65536 00:22:23.183 } 00:22:23.183 ] 00:22:23.183 } 00:22:23.183 } 00:22:23.183 }' 00:22:23.183 14:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:23.183 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:23.183 BaseBdev2 00:22:23.183 BaseBdev3' 00:22:23.183 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.441 [2024-11-04 14:54:53.276238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:23.441 [2024-11-04 14:54:53.276496] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:23.441 [2024-11-04 14:54:53.276624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.441 [2024-11-04 14:54:53.276995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.441 [2024-11-04 14:54:53.277020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80370 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80370 ']' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 80370 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80370 00:22:23.441 killing process with pid 80370 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80370' 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80370 00:22:23.441 [2024-11-04 14:54:53.315124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:23.441 14:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80370 00:22:24.007 [2024-11-04 14:54:53.594832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:24.941 00:22:24.941 real 0m11.972s 00:22:24.941 user 0m19.751s 00:22:24.941 sys 0m1.735s 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:24.941 ************************************ 00:22:24.941 END TEST raid5f_state_function_test 00:22:24.941 ************************************ 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.941 14:54:54 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:22:24.941 14:54:54 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:24.941 14:54:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:24.941 14:54:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.941 ************************************ 00:22:24.941 START TEST raid5f_state_function_test_sb 00:22:24.941 ************************************ 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:24.941 14:54:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:24.941 Process raid pid: 81011 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81011 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81011' 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81011 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 81011 ']' 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:24.941 14:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.941 [2024-11-04 14:54:54.830591] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:22:24.941 [2024-11-04 14:54:54.830766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.199 [2024-11-04 14:54:55.017229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.457 [2024-11-04 14:54:55.158118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.715 [2024-11-04 14:54:55.370834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.715 [2024-11-04 14:54:55.370894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.281 [2024-11-04 14:54:55.877073] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:26.281 [2024-11-04 14:54:55.877437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:26.281 [2024-11-04 14:54:55.877469] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:26.281 [2024-11-04 14:54:55.877488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:26.281 [2024-11-04 14:54:55.877499] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:22:26.281 [2024-11-04 14:54:55.877514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.281 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.282 14:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.282 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.282 14:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.282 14:54:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.282 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.282 "name": "Existed_Raid", 00:22:26.282 "uuid": "1bea3140-b649-41de-8caa-72fcdcfc8a51", 00:22:26.282 "strip_size_kb": 64, 00:22:26.282 "state": "configuring", 00:22:26.282 "raid_level": "raid5f", 00:22:26.282 "superblock": true, 00:22:26.282 "num_base_bdevs": 3, 00:22:26.282 "num_base_bdevs_discovered": 0, 00:22:26.282 "num_base_bdevs_operational": 3, 00:22:26.282 "base_bdevs_list": [ 00:22:26.282 { 00:22:26.282 "name": "BaseBdev1", 00:22:26.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.282 "is_configured": false, 00:22:26.282 "data_offset": 0, 00:22:26.282 "data_size": 0 00:22:26.282 }, 00:22:26.282 { 00:22:26.282 "name": "BaseBdev2", 00:22:26.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.282 "is_configured": false, 00:22:26.282 "data_offset": 0, 00:22:26.282 "data_size": 0 00:22:26.282 }, 00:22:26.282 { 00:22:26.282 "name": "BaseBdev3", 00:22:26.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.282 "is_configured": false, 00:22:26.282 "data_offset": 0, 00:22:26.282 "data_size": 0 00:22:26.282 } 00:22:26.282 ] 00:22:26.282 }' 00:22:26.282 14:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.282 14:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.540 [2024-11-04 14:54:56.381168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:26.540 
[2024-11-04 14:54:56.381229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.540 [2024-11-04 14:54:56.389131] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:26.540 [2024-11-04 14:54:56.389378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:26.540 [2024-11-04 14:54:56.389406] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:26.540 [2024-11-04 14:54:56.389425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:26.540 [2024-11-04 14:54:56.389436] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:26.540 [2024-11-04 14:54:56.389451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.540 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.797 [2024-11-04 14:54:56.434974] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:26.797 BaseBdev1 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:26.797 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.798 [ 00:22:26.798 { 00:22:26.798 "name": "BaseBdev1", 00:22:26.798 "aliases": [ 00:22:26.798 "b45c0d03-9499-469d-a418-2ded903db2f5" 00:22:26.798 ], 00:22:26.798 "product_name": "Malloc disk", 00:22:26.798 "block_size": 512, 00:22:26.798 
"num_blocks": 65536, 00:22:26.798 "uuid": "b45c0d03-9499-469d-a418-2ded903db2f5", 00:22:26.798 "assigned_rate_limits": { 00:22:26.798 "rw_ios_per_sec": 0, 00:22:26.798 "rw_mbytes_per_sec": 0, 00:22:26.798 "r_mbytes_per_sec": 0, 00:22:26.798 "w_mbytes_per_sec": 0 00:22:26.798 }, 00:22:26.798 "claimed": true, 00:22:26.798 "claim_type": "exclusive_write", 00:22:26.798 "zoned": false, 00:22:26.798 "supported_io_types": { 00:22:26.798 "read": true, 00:22:26.798 "write": true, 00:22:26.798 "unmap": true, 00:22:26.798 "flush": true, 00:22:26.798 "reset": true, 00:22:26.798 "nvme_admin": false, 00:22:26.798 "nvme_io": false, 00:22:26.798 "nvme_io_md": false, 00:22:26.798 "write_zeroes": true, 00:22:26.798 "zcopy": true, 00:22:26.798 "get_zone_info": false, 00:22:26.798 "zone_management": false, 00:22:26.798 "zone_append": false, 00:22:26.798 "compare": false, 00:22:26.798 "compare_and_write": false, 00:22:26.798 "abort": true, 00:22:26.798 "seek_hole": false, 00:22:26.798 "seek_data": false, 00:22:26.798 "copy": true, 00:22:26.798 "nvme_iov_md": false 00:22:26.798 }, 00:22:26.798 "memory_domains": [ 00:22:26.798 { 00:22:26.798 "dma_device_id": "system", 00:22:26.798 "dma_device_type": 1 00:22:26.798 }, 00:22:26.798 { 00:22:26.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.798 "dma_device_type": 2 00:22:26.798 } 00:22:26.798 ], 00:22:26.798 "driver_specific": {} 00:22:26.798 } 00:22:26.798 ] 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.798 "name": "Existed_Raid", 00:22:26.798 "uuid": "95d4611b-9efb-4c7a-83d9-c315718f1ba4", 00:22:26.798 "strip_size_kb": 64, 00:22:26.798 "state": "configuring", 00:22:26.798 "raid_level": "raid5f", 00:22:26.798 "superblock": true, 00:22:26.798 "num_base_bdevs": 3, 00:22:26.798 "num_base_bdevs_discovered": 1, 00:22:26.798 "num_base_bdevs_operational": 3, 00:22:26.798 "base_bdevs_list": [ 00:22:26.798 { 00:22:26.798 
"name": "BaseBdev1", 00:22:26.798 "uuid": "b45c0d03-9499-469d-a418-2ded903db2f5", 00:22:26.798 "is_configured": true, 00:22:26.798 "data_offset": 2048, 00:22:26.798 "data_size": 63488 00:22:26.798 }, 00:22:26.798 { 00:22:26.798 "name": "BaseBdev2", 00:22:26.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.798 "is_configured": false, 00:22:26.798 "data_offset": 0, 00:22:26.798 "data_size": 0 00:22:26.798 }, 00:22:26.798 { 00:22:26.798 "name": "BaseBdev3", 00:22:26.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.798 "is_configured": false, 00:22:26.798 "data_offset": 0, 00:22:26.798 "data_size": 0 00:22:26.798 } 00:22:26.798 ] 00:22:26.798 }' 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.798 14:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.363 [2024-11-04 14:54:57.027266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:27.363 [2024-11-04 14:54:57.027667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:22:27.363 [2024-11-04 14:54:57.035312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:27.363 [2024-11-04 14:54:57.038186] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:27.363 [2024-11-04 14:54:57.038258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:27.363 [2024-11-04 14:54:57.038278] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:27.363 [2024-11-04 14:54:57.038295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.363 "name": "Existed_Raid", 00:22:27.363 "uuid": "0b10b4ff-f9a5-41c7-a9d2-bbeec0564241", 00:22:27.363 "strip_size_kb": 64, 00:22:27.363 "state": "configuring", 00:22:27.363 "raid_level": "raid5f", 00:22:27.363 "superblock": true, 00:22:27.363 "num_base_bdevs": 3, 00:22:27.363 "num_base_bdevs_discovered": 1, 00:22:27.363 "num_base_bdevs_operational": 3, 00:22:27.363 "base_bdevs_list": [ 00:22:27.363 { 00:22:27.363 "name": "BaseBdev1", 00:22:27.363 "uuid": "b45c0d03-9499-469d-a418-2ded903db2f5", 00:22:27.363 "is_configured": true, 00:22:27.363 "data_offset": 2048, 00:22:27.363 "data_size": 63488 00:22:27.363 }, 00:22:27.363 { 00:22:27.363 "name": "BaseBdev2", 00:22:27.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.363 "is_configured": false, 00:22:27.363 "data_offset": 0, 00:22:27.363 "data_size": 0 00:22:27.363 }, 00:22:27.363 { 00:22:27.363 "name": "BaseBdev3", 00:22:27.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.363 "is_configured": false, 00:22:27.363 "data_offset": 0, 00:22:27.363 "data_size": 
0 00:22:27.363 } 00:22:27.363 ] 00:22:27.363 }' 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.363 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.929 [2024-11-04 14:54:57.622810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:27.929 BaseBdev2 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.929 [ 00:22:27.929 { 00:22:27.929 "name": "BaseBdev2", 00:22:27.929 "aliases": [ 00:22:27.929 "5be8e0ab-c312-4f73-930f-605160b796dc" 00:22:27.929 ], 00:22:27.929 "product_name": "Malloc disk", 00:22:27.929 "block_size": 512, 00:22:27.929 "num_blocks": 65536, 00:22:27.929 "uuid": "5be8e0ab-c312-4f73-930f-605160b796dc", 00:22:27.929 "assigned_rate_limits": { 00:22:27.929 "rw_ios_per_sec": 0, 00:22:27.929 "rw_mbytes_per_sec": 0, 00:22:27.929 "r_mbytes_per_sec": 0, 00:22:27.929 "w_mbytes_per_sec": 0 00:22:27.929 }, 00:22:27.929 "claimed": true, 00:22:27.929 "claim_type": "exclusive_write", 00:22:27.929 "zoned": false, 00:22:27.929 "supported_io_types": { 00:22:27.929 "read": true, 00:22:27.929 "write": true, 00:22:27.929 "unmap": true, 00:22:27.929 "flush": true, 00:22:27.929 "reset": true, 00:22:27.929 "nvme_admin": false, 00:22:27.929 "nvme_io": false, 00:22:27.929 "nvme_io_md": false, 00:22:27.929 "write_zeroes": true, 00:22:27.929 "zcopy": true, 00:22:27.929 "get_zone_info": false, 00:22:27.929 "zone_management": false, 00:22:27.929 "zone_append": false, 00:22:27.929 "compare": false, 00:22:27.929 "compare_and_write": false, 00:22:27.929 "abort": true, 00:22:27.929 "seek_hole": false, 00:22:27.929 "seek_data": false, 00:22:27.929 "copy": true, 00:22:27.929 "nvme_iov_md": false 00:22:27.929 }, 00:22:27.929 "memory_domains": [ 00:22:27.929 { 00:22:27.929 "dma_device_id": "system", 00:22:27.929 "dma_device_type": 1 00:22:27.929 }, 00:22:27.929 { 00:22:27.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.929 "dma_device_type": 2 00:22:27.929 } 
00:22:27.929 ], 00:22:27.929 "driver_specific": {} 00:22:27.929 } 00:22:27.929 ] 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.929 "name": "Existed_Raid", 00:22:27.929 "uuid": "0b10b4ff-f9a5-41c7-a9d2-bbeec0564241", 00:22:27.929 "strip_size_kb": 64, 00:22:27.929 "state": "configuring", 00:22:27.929 "raid_level": "raid5f", 00:22:27.929 "superblock": true, 00:22:27.929 "num_base_bdevs": 3, 00:22:27.929 "num_base_bdevs_discovered": 2, 00:22:27.929 "num_base_bdevs_operational": 3, 00:22:27.929 "base_bdevs_list": [ 00:22:27.929 { 00:22:27.929 "name": "BaseBdev1", 00:22:27.929 "uuid": "b45c0d03-9499-469d-a418-2ded903db2f5", 00:22:27.929 "is_configured": true, 00:22:27.929 "data_offset": 2048, 00:22:27.929 "data_size": 63488 00:22:27.929 }, 00:22:27.929 { 00:22:27.929 "name": "BaseBdev2", 00:22:27.929 "uuid": "5be8e0ab-c312-4f73-930f-605160b796dc", 00:22:27.929 "is_configured": true, 00:22:27.929 "data_offset": 2048, 00:22:27.929 "data_size": 63488 00:22:27.929 }, 00:22:27.929 { 00:22:27.929 "name": "BaseBdev3", 00:22:27.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.929 "is_configured": false, 00:22:27.929 "data_offset": 0, 00:22:27.929 "data_size": 0 00:22:27.929 } 00:22:27.929 ] 00:22:27.929 }' 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.929 14:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.496 [2024-11-04 14:54:58.234583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:28.496 [2024-11-04 14:54:58.235079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:28.496 [2024-11-04 14:54:58.235118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:28.496 BaseBdev3 00:22:28.496 [2024-11-04 14:54:58.235528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.496 [2024-11-04 14:54:58.241179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:28.496 [2024-11-04 14:54:58.241207] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:28.496 [2024-11-04 14:54:58.241659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.496 [ 00:22:28.496 { 00:22:28.496 "name": "BaseBdev3", 00:22:28.496 "aliases": [ 00:22:28.496 "cf89572a-1b6b-4923-8416-cb5540aa1cf9" 00:22:28.496 ], 00:22:28.496 "product_name": "Malloc disk", 00:22:28.496 "block_size": 512, 00:22:28.496 "num_blocks": 65536, 00:22:28.496 "uuid": "cf89572a-1b6b-4923-8416-cb5540aa1cf9", 00:22:28.496 "assigned_rate_limits": { 00:22:28.496 "rw_ios_per_sec": 0, 00:22:28.496 "rw_mbytes_per_sec": 0, 00:22:28.496 "r_mbytes_per_sec": 0, 00:22:28.496 "w_mbytes_per_sec": 0 00:22:28.496 }, 00:22:28.496 "claimed": true, 00:22:28.496 "claim_type": "exclusive_write", 00:22:28.496 "zoned": false, 00:22:28.496 "supported_io_types": { 00:22:28.496 "read": true, 00:22:28.496 "write": true, 00:22:28.496 "unmap": true, 00:22:28.496 "flush": true, 00:22:28.496 "reset": true, 00:22:28.496 "nvme_admin": false, 00:22:28.496 "nvme_io": false, 00:22:28.496 "nvme_io_md": false, 00:22:28.496 "write_zeroes": true, 00:22:28.496 "zcopy": true, 00:22:28.496 "get_zone_info": false, 00:22:28.496 "zone_management": false, 00:22:28.496 "zone_append": false, 00:22:28.496 "compare": false, 00:22:28.496 "compare_and_write": false, 00:22:28.496 "abort": true, 00:22:28.496 "seek_hole": false, 00:22:28.496 "seek_data": false, 00:22:28.496 "copy": true, 00:22:28.496 
"nvme_iov_md": false 00:22:28.496 }, 00:22:28.496 "memory_domains": [ 00:22:28.496 { 00:22:28.496 "dma_device_id": "system", 00:22:28.496 "dma_device_type": 1 00:22:28.496 }, 00:22:28.496 { 00:22:28.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.496 "dma_device_type": 2 00:22:28.496 } 00:22:28.496 ], 00:22:28.496 "driver_specific": {} 00:22:28.496 } 00:22:28.496 ] 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:28.496 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.497 "name": "Existed_Raid", 00:22:28.497 "uuid": "0b10b4ff-f9a5-41c7-a9d2-bbeec0564241", 00:22:28.497 "strip_size_kb": 64, 00:22:28.497 "state": "online", 00:22:28.497 "raid_level": "raid5f", 00:22:28.497 "superblock": true, 00:22:28.497 "num_base_bdevs": 3, 00:22:28.497 "num_base_bdevs_discovered": 3, 00:22:28.497 "num_base_bdevs_operational": 3, 00:22:28.497 "base_bdevs_list": [ 00:22:28.497 { 00:22:28.497 "name": "BaseBdev1", 00:22:28.497 "uuid": "b45c0d03-9499-469d-a418-2ded903db2f5", 00:22:28.497 "is_configured": true, 00:22:28.497 "data_offset": 2048, 00:22:28.497 "data_size": 63488 00:22:28.497 }, 00:22:28.497 { 00:22:28.497 "name": "BaseBdev2", 00:22:28.497 "uuid": "5be8e0ab-c312-4f73-930f-605160b796dc", 00:22:28.497 "is_configured": true, 00:22:28.497 "data_offset": 2048, 00:22:28.497 "data_size": 63488 00:22:28.497 }, 00:22:28.497 { 00:22:28.497 "name": "BaseBdev3", 00:22:28.497 "uuid": "cf89572a-1b6b-4923-8416-cb5540aa1cf9", 00:22:28.497 "is_configured": true, 00:22:28.497 "data_offset": 2048, 00:22:28.497 "data_size": 63488 00:22:28.497 } 00:22:28.497 ] 00:22:28.497 }' 00:22:28.497 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.497 14:54:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.062 [2024-11-04 14:54:58.800355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.062 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:29.062 "name": "Existed_Raid", 00:22:29.062 "aliases": [ 00:22:29.062 "0b10b4ff-f9a5-41c7-a9d2-bbeec0564241" 00:22:29.062 ], 00:22:29.062 "product_name": "Raid Volume", 00:22:29.062 "block_size": 512, 00:22:29.062 "num_blocks": 126976, 00:22:29.062 "uuid": "0b10b4ff-f9a5-41c7-a9d2-bbeec0564241", 00:22:29.062 "assigned_rate_limits": { 00:22:29.062 "rw_ios_per_sec": 0, 00:22:29.062 
"rw_mbytes_per_sec": 0, 00:22:29.062 "r_mbytes_per_sec": 0, 00:22:29.062 "w_mbytes_per_sec": 0 00:22:29.062 }, 00:22:29.062 "claimed": false, 00:22:29.062 "zoned": false, 00:22:29.062 "supported_io_types": { 00:22:29.062 "read": true, 00:22:29.062 "write": true, 00:22:29.062 "unmap": false, 00:22:29.062 "flush": false, 00:22:29.062 "reset": true, 00:22:29.062 "nvme_admin": false, 00:22:29.062 "nvme_io": false, 00:22:29.062 "nvme_io_md": false, 00:22:29.062 "write_zeroes": true, 00:22:29.062 "zcopy": false, 00:22:29.062 "get_zone_info": false, 00:22:29.062 "zone_management": false, 00:22:29.062 "zone_append": false, 00:22:29.062 "compare": false, 00:22:29.062 "compare_and_write": false, 00:22:29.062 "abort": false, 00:22:29.062 "seek_hole": false, 00:22:29.062 "seek_data": false, 00:22:29.062 "copy": false, 00:22:29.062 "nvme_iov_md": false 00:22:29.062 }, 00:22:29.062 "driver_specific": { 00:22:29.062 "raid": { 00:22:29.062 "uuid": "0b10b4ff-f9a5-41c7-a9d2-bbeec0564241", 00:22:29.062 "strip_size_kb": 64, 00:22:29.062 "state": "online", 00:22:29.062 "raid_level": "raid5f", 00:22:29.062 "superblock": true, 00:22:29.062 "num_base_bdevs": 3, 00:22:29.062 "num_base_bdevs_discovered": 3, 00:22:29.062 "num_base_bdevs_operational": 3, 00:22:29.062 "base_bdevs_list": [ 00:22:29.062 { 00:22:29.062 "name": "BaseBdev1", 00:22:29.062 "uuid": "b45c0d03-9499-469d-a418-2ded903db2f5", 00:22:29.062 "is_configured": true, 00:22:29.062 "data_offset": 2048, 00:22:29.062 "data_size": 63488 00:22:29.062 }, 00:22:29.062 { 00:22:29.062 "name": "BaseBdev2", 00:22:29.062 "uuid": "5be8e0ab-c312-4f73-930f-605160b796dc", 00:22:29.062 "is_configured": true, 00:22:29.062 "data_offset": 2048, 00:22:29.063 "data_size": 63488 00:22:29.063 }, 00:22:29.063 { 00:22:29.063 "name": "BaseBdev3", 00:22:29.063 "uuid": "cf89572a-1b6b-4923-8416-cb5540aa1cf9", 00:22:29.063 "is_configured": true, 00:22:29.063 "data_offset": 2048, 00:22:29.063 "data_size": 63488 00:22:29.063 } 00:22:29.063 ] 00:22:29.063 } 
00:22:29.063 } 00:22:29.063 }' 00:22:29.063 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:29.063 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:29.063 BaseBdev2 00:22:29.063 BaseBdev3' 00:22:29.063 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:29.321 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:29.321 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:29.321 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:29.321 14:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:29.321 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.321 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.321 14:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.321 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.321 [2024-11-04 
14:54:59.144197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.579 14:54:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.579 "name": "Existed_Raid", 00:22:29.579 "uuid": "0b10b4ff-f9a5-41c7-a9d2-bbeec0564241", 00:22:29.579 "strip_size_kb": 64, 00:22:29.579 "state": "online", 00:22:29.579 "raid_level": "raid5f", 00:22:29.579 "superblock": true, 00:22:29.579 "num_base_bdevs": 3, 00:22:29.579 "num_base_bdevs_discovered": 2, 00:22:29.579 "num_base_bdevs_operational": 2, 00:22:29.579 "base_bdevs_list": [ 00:22:29.579 { 00:22:29.579 "name": null, 00:22:29.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.579 "is_configured": false, 00:22:29.579 "data_offset": 0, 00:22:29.579 "data_size": 63488 00:22:29.579 }, 00:22:29.579 { 00:22:29.579 "name": "BaseBdev2", 00:22:29.579 "uuid": "5be8e0ab-c312-4f73-930f-605160b796dc", 00:22:29.579 "is_configured": true, 00:22:29.579 "data_offset": 2048, 00:22:29.579 "data_size": 63488 00:22:29.579 }, 00:22:29.579 { 00:22:29.579 "name": "BaseBdev3", 00:22:29.579 "uuid": "cf89572a-1b6b-4923-8416-cb5540aa1cf9", 00:22:29.579 "is_configured": true, 00:22:29.579 "data_offset": 2048, 00:22:29.579 "data_size": 63488 00:22:29.579 } 00:22:29.579 ] 00:22:29.579 }' 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.579 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 [2024-11-04 14:54:59.874992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:30.144 [2024-11-04 14:54:59.875356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:30.144 [2024-11-04 14:54:59.962689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:30.144 14:54:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 14:54:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.144 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:30.144 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:30.144 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:30.144 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.144 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 [2024-11-04 14:55:00.018669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:30.144 [2024-11-04 14:55:00.018726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.401 
14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.401 BaseBdev2 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:30.401 14:55:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.401 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.401 [ 00:22:30.401 { 00:22:30.401 "name": "BaseBdev2", 00:22:30.401 "aliases": [ 00:22:30.401 "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7" 00:22:30.401 ], 00:22:30.401 "product_name": "Malloc disk", 00:22:30.401 "block_size": 512, 00:22:30.401 "num_blocks": 65536, 00:22:30.401 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:30.401 "assigned_rate_limits": { 00:22:30.401 "rw_ios_per_sec": 0, 00:22:30.401 "rw_mbytes_per_sec": 0, 00:22:30.401 "r_mbytes_per_sec": 0, 00:22:30.401 "w_mbytes_per_sec": 0 00:22:30.401 }, 00:22:30.402 "claimed": false, 00:22:30.402 "zoned": false, 00:22:30.402 "supported_io_types": { 00:22:30.402 "read": true, 00:22:30.402 "write": true, 00:22:30.402 "unmap": true, 00:22:30.402 "flush": true, 00:22:30.402 "reset": true, 00:22:30.402 "nvme_admin": false, 00:22:30.402 "nvme_io": false, 00:22:30.402 "nvme_io_md": false, 00:22:30.402 "write_zeroes": true, 00:22:30.402 "zcopy": true, 00:22:30.402 "get_zone_info": false, 
00:22:30.402 "zone_management": false, 00:22:30.402 "zone_append": false, 00:22:30.402 "compare": false, 00:22:30.402 "compare_and_write": false, 00:22:30.402 "abort": true, 00:22:30.402 "seek_hole": false, 00:22:30.402 "seek_data": false, 00:22:30.402 "copy": true, 00:22:30.402 "nvme_iov_md": false 00:22:30.402 }, 00:22:30.402 "memory_domains": [ 00:22:30.402 { 00:22:30.402 "dma_device_id": "system", 00:22:30.402 "dma_device_type": 1 00:22:30.402 }, 00:22:30.402 { 00:22:30.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.402 "dma_device_type": 2 00:22:30.402 } 00:22:30.402 ], 00:22:30.402 "driver_specific": {} 00:22:30.402 } 00:22:30.402 ] 00:22:30.402 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.402 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:30.402 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:30.402 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:30.402 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:30.402 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.402 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.402 BaseBdev3 00:22:30.402 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:30.660 14:55:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.660 [ 00:22:30.660 { 00:22:30.660 "name": "BaseBdev3", 00:22:30.660 "aliases": [ 00:22:30.660 "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d" 00:22:30.660 ], 00:22:30.660 "product_name": "Malloc disk", 00:22:30.660 "block_size": 512, 00:22:30.660 "num_blocks": 65536, 00:22:30.660 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:30.660 "assigned_rate_limits": { 00:22:30.660 "rw_ios_per_sec": 0, 00:22:30.660 "rw_mbytes_per_sec": 0, 00:22:30.660 "r_mbytes_per_sec": 0, 00:22:30.660 "w_mbytes_per_sec": 0 00:22:30.660 }, 00:22:30.660 "claimed": false, 00:22:30.660 "zoned": false, 00:22:30.660 "supported_io_types": { 00:22:30.660 "read": true, 00:22:30.660 "write": true, 00:22:30.660 "unmap": true, 00:22:30.660 "flush": true, 00:22:30.660 "reset": true, 00:22:30.660 "nvme_admin": false, 00:22:30.660 "nvme_io": false, 00:22:30.660 "nvme_io_md": 
false, 00:22:30.660 "write_zeroes": true, 00:22:30.660 "zcopy": true, 00:22:30.660 "get_zone_info": false, 00:22:30.660 "zone_management": false, 00:22:30.660 "zone_append": false, 00:22:30.660 "compare": false, 00:22:30.660 "compare_and_write": false, 00:22:30.660 "abort": true, 00:22:30.660 "seek_hole": false, 00:22:30.660 "seek_data": false, 00:22:30.660 "copy": true, 00:22:30.660 "nvme_iov_md": false 00:22:30.660 }, 00:22:30.660 "memory_domains": [ 00:22:30.660 { 00:22:30.660 "dma_device_id": "system", 00:22:30.660 "dma_device_type": 1 00:22:30.660 }, 00:22:30.660 { 00:22:30.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.660 "dma_device_type": 2 00:22:30.660 } 00:22:30.660 ], 00:22:30.660 "driver_specific": {} 00:22:30.660 } 00:22:30.660 ] 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.660 [2024-11-04 14:55:00.329488] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:30.660 [2024-11-04 14:55:00.329571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:30.660 [2024-11-04 14:55:00.329624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:22:30.660 [2024-11-04 14:55:00.332187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.660 14:55:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.660 "name": "Existed_Raid", 00:22:30.660 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:30.660 "strip_size_kb": 64, 00:22:30.660 "state": "configuring", 00:22:30.660 "raid_level": "raid5f", 00:22:30.660 "superblock": true, 00:22:30.660 "num_base_bdevs": 3, 00:22:30.660 "num_base_bdevs_discovered": 2, 00:22:30.660 "num_base_bdevs_operational": 3, 00:22:30.660 "base_bdevs_list": [ 00:22:30.660 { 00:22:30.660 "name": "BaseBdev1", 00:22:30.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.660 "is_configured": false, 00:22:30.660 "data_offset": 0, 00:22:30.660 "data_size": 0 00:22:30.660 }, 00:22:30.660 { 00:22:30.660 "name": "BaseBdev2", 00:22:30.660 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:30.660 "is_configured": true, 00:22:30.660 "data_offset": 2048, 00:22:30.660 "data_size": 63488 00:22:30.660 }, 00:22:30.660 { 00:22:30.660 "name": "BaseBdev3", 00:22:30.660 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:30.660 "is_configured": true, 00:22:30.660 "data_offset": 2048, 00:22:30.660 "data_size": 63488 00:22:30.660 } 00:22:30.660 ] 00:22:30.660 }' 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.660 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 [2024-11-04 14:55:00.873670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:31.227 
14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:31.227 "name": "Existed_Raid", 00:22:31.227 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:31.227 "strip_size_kb": 64, 00:22:31.227 "state": "configuring", 00:22:31.227 "raid_level": "raid5f", 00:22:31.227 "superblock": true, 00:22:31.227 "num_base_bdevs": 3, 00:22:31.227 "num_base_bdevs_discovered": 1, 00:22:31.227 "num_base_bdevs_operational": 3, 00:22:31.227 "base_bdevs_list": [ 00:22:31.227 { 00:22:31.227 "name": "BaseBdev1", 00:22:31.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.227 "is_configured": false, 00:22:31.227 "data_offset": 0, 00:22:31.227 "data_size": 0 00:22:31.227 }, 00:22:31.227 { 00:22:31.227 "name": null, 00:22:31.227 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:31.227 "is_configured": false, 00:22:31.227 "data_offset": 0, 00:22:31.227 "data_size": 63488 00:22:31.227 }, 00:22:31.227 { 00:22:31.227 "name": "BaseBdev3", 00:22:31.227 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:31.227 "is_configured": true, 00:22:31.227 "data_offset": 2048, 00:22:31.227 "data_size": 63488 00:22:31.227 } 00:22:31.227 ] 00:22:31.227 }' 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.227 14:55:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.793 [2024-11-04 14:55:01.553271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:31.793 BaseBdev1 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:31.793 
14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.793 [ 00:22:31.793 { 00:22:31.793 "name": "BaseBdev1", 00:22:31.793 "aliases": [ 00:22:31.793 "aee5e883-8039-4fe5-a124-d5170d4c062d" 00:22:31.793 ], 00:22:31.793 "product_name": "Malloc disk", 00:22:31.793 "block_size": 512, 00:22:31.793 "num_blocks": 65536, 00:22:31.793 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:31.793 "assigned_rate_limits": { 00:22:31.793 "rw_ios_per_sec": 0, 00:22:31.793 "rw_mbytes_per_sec": 0, 00:22:31.793 "r_mbytes_per_sec": 0, 00:22:31.793 "w_mbytes_per_sec": 0 00:22:31.793 }, 00:22:31.793 "claimed": true, 00:22:31.793 "claim_type": "exclusive_write", 00:22:31.793 "zoned": false, 00:22:31.793 "supported_io_types": { 00:22:31.793 "read": true, 00:22:31.793 "write": true, 00:22:31.793 "unmap": true, 00:22:31.793 "flush": true, 00:22:31.793 "reset": true, 00:22:31.793 "nvme_admin": false, 00:22:31.793 "nvme_io": false, 00:22:31.793 "nvme_io_md": false, 00:22:31.793 "write_zeroes": true, 00:22:31.793 "zcopy": true, 00:22:31.793 "get_zone_info": false, 00:22:31.793 "zone_management": false, 00:22:31.793 "zone_append": false, 00:22:31.793 "compare": false, 00:22:31.793 "compare_and_write": false, 00:22:31.793 "abort": true, 00:22:31.793 "seek_hole": false, 00:22:31.793 "seek_data": false, 00:22:31.793 "copy": true, 00:22:31.793 "nvme_iov_md": false 00:22:31.793 }, 00:22:31.793 "memory_domains": [ 00:22:31.793 { 00:22:31.793 "dma_device_id": "system", 00:22:31.793 "dma_device_type": 1 00:22:31.793 }, 00:22:31.793 { 00:22:31.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.793 "dma_device_type": 2 00:22:31.793 } 00:22:31.793 ], 00:22:31.793 "driver_specific": {} 00:22:31.793 } 00:22:31.793 ] 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.793 
14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.793 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.794 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.794 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.794 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.794 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.794 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:31.794 "name": "Existed_Raid", 00:22:31.794 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:31.794 "strip_size_kb": 64, 00:22:31.794 "state": "configuring", 00:22:31.794 "raid_level": "raid5f", 00:22:31.794 "superblock": true, 00:22:31.794 "num_base_bdevs": 3, 00:22:31.794 "num_base_bdevs_discovered": 2, 00:22:31.794 "num_base_bdevs_operational": 3, 00:22:31.794 "base_bdevs_list": [ 00:22:31.794 { 00:22:31.794 "name": "BaseBdev1", 00:22:31.794 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:31.794 "is_configured": true, 00:22:31.794 "data_offset": 2048, 00:22:31.794 "data_size": 63488 00:22:31.794 }, 00:22:31.794 { 00:22:31.794 "name": null, 00:22:31.794 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:31.794 "is_configured": false, 00:22:31.794 "data_offset": 0, 00:22:31.794 "data_size": 63488 00:22:31.794 }, 00:22:31.794 { 00:22:31.794 "name": "BaseBdev3", 00:22:31.794 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:31.794 "is_configured": true, 00:22:31.794 "data_offset": 2048, 00:22:31.794 "data_size": 63488 00:22:31.794 } 00:22:31.794 ] 00:22:31.794 }' 00:22:31.794 14:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.794 14:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 [2024-11-04 14:55:02.213630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.359 14:55:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.359 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.616 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.616 "name": "Existed_Raid", 00:22:32.616 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:32.616 "strip_size_kb": 64, 00:22:32.616 "state": "configuring", 00:22:32.616 "raid_level": "raid5f", 00:22:32.616 "superblock": true, 00:22:32.616 "num_base_bdevs": 3, 00:22:32.616 "num_base_bdevs_discovered": 1, 00:22:32.616 "num_base_bdevs_operational": 3, 00:22:32.616 "base_bdevs_list": [ 00:22:32.616 { 00:22:32.616 "name": "BaseBdev1", 00:22:32.616 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:32.616 "is_configured": true, 00:22:32.616 "data_offset": 2048, 00:22:32.616 "data_size": 63488 00:22:32.616 }, 00:22:32.616 { 00:22:32.616 "name": null, 00:22:32.616 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:32.616 "is_configured": false, 00:22:32.616 "data_offset": 0, 00:22:32.616 "data_size": 63488 00:22:32.616 }, 00:22:32.616 { 00:22:32.616 "name": null, 00:22:32.616 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:32.616 "is_configured": false, 00:22:32.616 "data_offset": 0, 00:22:32.616 "data_size": 63488 00:22:32.616 } 00:22:32.616 ] 00:22:32.616 }' 00:22:32.616 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.616 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.873 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:32.873 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:32.873 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.874 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.132 [2024-11-04 14:55:02.817796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:33.132 14:55:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.132 "name": "Existed_Raid", 00:22:33.132 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:33.132 "strip_size_kb": 64, 00:22:33.132 "state": "configuring", 00:22:33.132 "raid_level": "raid5f", 00:22:33.132 "superblock": true, 00:22:33.132 "num_base_bdevs": 3, 00:22:33.132 "num_base_bdevs_discovered": 2, 00:22:33.132 "num_base_bdevs_operational": 3, 00:22:33.132 "base_bdevs_list": [ 00:22:33.132 { 00:22:33.132 "name": "BaseBdev1", 00:22:33.132 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:33.132 "is_configured": true, 00:22:33.132 "data_offset": 2048, 00:22:33.132 "data_size": 63488 00:22:33.132 }, 00:22:33.132 { 00:22:33.132 "name": null, 00:22:33.132 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:33.132 "is_configured": false, 00:22:33.132 "data_offset": 0, 00:22:33.132 "data_size": 63488 00:22:33.132 }, 00:22:33.132 { 
00:22:33.132 "name": "BaseBdev3", 00:22:33.132 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:33.132 "is_configured": true, 00:22:33.132 "data_offset": 2048, 00:22:33.132 "data_size": 63488 00:22:33.132 } 00:22:33.132 ] 00:22:33.132 }' 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.132 14:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.699 [2024-11-04 14:55:03.421963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.699 "name": "Existed_Raid", 00:22:33.699 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:33.699 "strip_size_kb": 64, 00:22:33.699 "state": "configuring", 00:22:33.699 "raid_level": "raid5f", 00:22:33.699 "superblock": true, 00:22:33.699 "num_base_bdevs": 3, 00:22:33.699 "num_base_bdevs_discovered": 1, 00:22:33.699 
"num_base_bdevs_operational": 3, 00:22:33.699 "base_bdevs_list": [ 00:22:33.699 { 00:22:33.699 "name": null, 00:22:33.699 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:33.699 "is_configured": false, 00:22:33.699 "data_offset": 0, 00:22:33.699 "data_size": 63488 00:22:33.699 }, 00:22:33.699 { 00:22:33.699 "name": null, 00:22:33.699 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:33.699 "is_configured": false, 00:22:33.699 "data_offset": 0, 00:22:33.699 "data_size": 63488 00:22:33.699 }, 00:22:33.699 { 00:22:33.699 "name": "BaseBdev3", 00:22:33.699 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:33.699 "is_configured": true, 00:22:33.699 "data_offset": 2048, 00:22:33.699 "data_size": 63488 00:22:33.699 } 00:22:33.699 ] 00:22:33.699 }' 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.699 14:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.264 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.264 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:34.264 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.264 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.265 14:55:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.265 [2024-11-04 14:55:04.132113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.265 14:55:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.522 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.522 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.522 "name": "Existed_Raid", 00:22:34.522 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:34.522 "strip_size_kb": 64, 00:22:34.522 "state": "configuring", 00:22:34.522 "raid_level": "raid5f", 00:22:34.522 "superblock": true, 00:22:34.522 "num_base_bdevs": 3, 00:22:34.522 "num_base_bdevs_discovered": 2, 00:22:34.522 "num_base_bdevs_operational": 3, 00:22:34.522 "base_bdevs_list": [ 00:22:34.522 { 00:22:34.522 "name": null, 00:22:34.522 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:34.522 "is_configured": false, 00:22:34.522 "data_offset": 0, 00:22:34.522 "data_size": 63488 00:22:34.522 }, 00:22:34.522 { 00:22:34.522 "name": "BaseBdev2", 00:22:34.522 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:34.522 "is_configured": true, 00:22:34.522 "data_offset": 2048, 00:22:34.522 "data_size": 63488 00:22:34.522 }, 00:22:34.522 { 00:22:34.522 "name": "BaseBdev3", 00:22:34.522 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:34.522 "is_configured": true, 00:22:34.522 "data_offset": 2048, 00:22:34.522 "data_size": 63488 00:22:34.522 } 00:22:34.522 ] 00:22:34.522 }' 00:22:34.522 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.522 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aee5e883-8039-4fe5-a124-d5170d4c062d 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.100 [2024-11-04 14:55:04.834091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:35.100 [2024-11-04 14:55:04.834451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:35.100 [2024-11-04 14:55:04.834477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:35.100 NewBaseBdev 00:22:35.100 [2024-11-04 14:55:04.834804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.100 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 [2024-11-04 14:55:04.839827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:35.101 [2024-11-04 14:55:04.839855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:35.101 [2024-11-04 14:55:04.840177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 [ 00:22:35.101 { 00:22:35.101 "name": "NewBaseBdev", 00:22:35.101 "aliases": [ 00:22:35.101 
"aee5e883-8039-4fe5-a124-d5170d4c062d" 00:22:35.101 ], 00:22:35.101 "product_name": "Malloc disk", 00:22:35.101 "block_size": 512, 00:22:35.101 "num_blocks": 65536, 00:22:35.101 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:35.101 "assigned_rate_limits": { 00:22:35.101 "rw_ios_per_sec": 0, 00:22:35.101 "rw_mbytes_per_sec": 0, 00:22:35.101 "r_mbytes_per_sec": 0, 00:22:35.101 "w_mbytes_per_sec": 0 00:22:35.101 }, 00:22:35.101 "claimed": true, 00:22:35.101 "claim_type": "exclusive_write", 00:22:35.101 "zoned": false, 00:22:35.101 "supported_io_types": { 00:22:35.101 "read": true, 00:22:35.101 "write": true, 00:22:35.101 "unmap": true, 00:22:35.101 "flush": true, 00:22:35.101 "reset": true, 00:22:35.101 "nvme_admin": false, 00:22:35.101 "nvme_io": false, 00:22:35.101 "nvme_io_md": false, 00:22:35.101 "write_zeroes": true, 00:22:35.101 "zcopy": true, 00:22:35.101 "get_zone_info": false, 00:22:35.101 "zone_management": false, 00:22:35.101 "zone_append": false, 00:22:35.101 "compare": false, 00:22:35.101 "compare_and_write": false, 00:22:35.101 "abort": true, 00:22:35.101 "seek_hole": false, 00:22:35.101 "seek_data": false, 00:22:35.101 "copy": true, 00:22:35.101 "nvme_iov_md": false 00:22:35.101 }, 00:22:35.101 "memory_domains": [ 00:22:35.101 { 00:22:35.101 "dma_device_id": "system", 00:22:35.101 "dma_device_type": 1 00:22:35.101 }, 00:22:35.101 { 00:22:35.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.101 "dma_device_type": 2 00:22:35.101 } 00:22:35.101 ], 00:22:35.101 "driver_specific": {} 00:22:35.101 } 00:22:35.101 ] 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.101 "name": "Existed_Raid", 00:22:35.101 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:35.101 "strip_size_kb": 64, 00:22:35.101 "state": "online", 00:22:35.101 "raid_level": "raid5f", 00:22:35.101 "superblock": true, 00:22:35.101 "num_base_bdevs": 3, 00:22:35.101 
"num_base_bdevs_discovered": 3, 00:22:35.101 "num_base_bdevs_operational": 3, 00:22:35.101 "base_bdevs_list": [ 00:22:35.101 { 00:22:35.101 "name": "NewBaseBdev", 00:22:35.101 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:35.101 "is_configured": true, 00:22:35.101 "data_offset": 2048, 00:22:35.101 "data_size": 63488 00:22:35.101 }, 00:22:35.101 { 00:22:35.101 "name": "BaseBdev2", 00:22:35.101 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:35.101 "is_configured": true, 00:22:35.101 "data_offset": 2048, 00:22:35.101 "data_size": 63488 00:22:35.101 }, 00:22:35.101 { 00:22:35.101 "name": "BaseBdev3", 00:22:35.101 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:35.101 "is_configured": true, 00:22:35.101 "data_offset": 2048, 00:22:35.101 "data_size": 63488 00:22:35.101 } 00:22:35.101 ] 00:22:35.101 }' 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.101 14:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:35.680 [2024-11-04 14:55:05.402554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:35.680 "name": "Existed_Raid", 00:22:35.680 "aliases": [ 00:22:35.680 "2bb77a4e-7704-4584-907d-d567ebadfe25" 00:22:35.680 ], 00:22:35.680 "product_name": "Raid Volume", 00:22:35.680 "block_size": 512, 00:22:35.680 "num_blocks": 126976, 00:22:35.680 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:35.680 "assigned_rate_limits": { 00:22:35.680 "rw_ios_per_sec": 0, 00:22:35.680 "rw_mbytes_per_sec": 0, 00:22:35.680 "r_mbytes_per_sec": 0, 00:22:35.680 "w_mbytes_per_sec": 0 00:22:35.680 }, 00:22:35.680 "claimed": false, 00:22:35.680 "zoned": false, 00:22:35.680 "supported_io_types": { 00:22:35.680 "read": true, 00:22:35.680 "write": true, 00:22:35.680 "unmap": false, 00:22:35.680 "flush": false, 00:22:35.680 "reset": true, 00:22:35.680 "nvme_admin": false, 00:22:35.680 "nvme_io": false, 00:22:35.680 "nvme_io_md": false, 00:22:35.680 "write_zeroes": true, 00:22:35.680 "zcopy": false, 00:22:35.680 "get_zone_info": false, 00:22:35.680 "zone_management": false, 00:22:35.680 "zone_append": false, 00:22:35.680 "compare": false, 00:22:35.680 "compare_and_write": false, 00:22:35.680 "abort": false, 00:22:35.680 "seek_hole": false, 00:22:35.680 "seek_data": false, 00:22:35.680 "copy": false, 00:22:35.680 "nvme_iov_md": false 00:22:35.680 }, 00:22:35.680 "driver_specific": { 00:22:35.680 "raid": { 00:22:35.680 "uuid": "2bb77a4e-7704-4584-907d-d567ebadfe25", 00:22:35.680 "strip_size_kb": 64, 00:22:35.680 "state": 
"online", 00:22:35.680 "raid_level": "raid5f", 00:22:35.680 "superblock": true, 00:22:35.680 "num_base_bdevs": 3, 00:22:35.680 "num_base_bdevs_discovered": 3, 00:22:35.680 "num_base_bdevs_operational": 3, 00:22:35.680 "base_bdevs_list": [ 00:22:35.680 { 00:22:35.680 "name": "NewBaseBdev", 00:22:35.680 "uuid": "aee5e883-8039-4fe5-a124-d5170d4c062d", 00:22:35.680 "is_configured": true, 00:22:35.680 "data_offset": 2048, 00:22:35.680 "data_size": 63488 00:22:35.680 }, 00:22:35.680 { 00:22:35.680 "name": "BaseBdev2", 00:22:35.680 "uuid": "4ce2fb2f-2bcc-4c48-b659-aa84e4b6bbd7", 00:22:35.680 "is_configured": true, 00:22:35.680 "data_offset": 2048, 00:22:35.680 "data_size": 63488 00:22:35.680 }, 00:22:35.680 { 00:22:35.680 "name": "BaseBdev3", 00:22:35.680 "uuid": "1cadf2d9-f79c-4871-a7c9-6063ba8bce6d", 00:22:35.680 "is_configured": true, 00:22:35.680 "data_offset": 2048, 00:22:35.680 "data_size": 63488 00:22:35.680 } 00:22:35.680 ] 00:22:35.680 } 00:22:35.680 } 00:22:35.680 }' 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:35.680 BaseBdev2 00:22:35.680 BaseBdev3' 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.680 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.938 14:55:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.938 [2024-11-04 14:55:05.706377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:35.938 [2024-11-04 14:55:05.706616] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.938 [2024-11-04 14:55:05.706734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.938 [2024-11-04 14:55:05.707088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.938 [2024-11-04 14:55:05.707111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81011 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 81011 ']' 00:22:35.938 14:55:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 81011 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81011 00:22:35.938 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:35.938 killing process with pid 81011 00:22:35.939 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:35.939 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81011' 00:22:35.939 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 81011 00:22:35.939 [2024-11-04 14:55:05.747294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:35.939 14:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 81011 00:22:36.197 [2024-11-04 14:55:06.018734] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:37.572 ************************************ 00:22:37.572 END TEST raid5f_state_function_test_sb 00:22:37.572 ************************************ 00:22:37.572 14:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:37.572 00:22:37.572 real 0m12.396s 00:22:37.572 user 0m20.533s 00:22:37.572 sys 0m1.785s 00:22:37.572 14:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:37.572 14:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.572 14:55:07 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:22:37.572 14:55:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:37.572 14:55:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:37.572 14:55:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.572 ************************************ 00:22:37.572 START TEST raid5f_superblock_test 00:22:37.572 ************************************ 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81649 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81649 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81649 ']' 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:37.572 14:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.572 [2024-11-04 14:55:07.301698] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:22:37.572 [2024-11-04 14:55:07.301900] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81649 ] 00:22:37.830 [2024-11-04 14:55:07.496304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.830 [2024-11-04 14:55:07.659284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.088 [2024-11-04 14:55:07.889510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:38.088 [2024-11-04 14:55:07.889564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.654 malloc1 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.654 [2024-11-04 14:55:08.371853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:38.654 [2024-11-04 14:55:08.372153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.654 [2024-11-04 14:55:08.372245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:38.654 [2024-11-04 14:55:08.372385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.654 [2024-11-04 14:55:08.375211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.654 [2024-11-04 14:55:08.375272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:38.654 pt1 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.654 malloc2 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.654 [2024-11-04 14:55:08.424034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:38.654 [2024-11-04 14:55:08.424116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.654 [2024-11-04 14:55:08.424147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:38.654 [2024-11-04 14:55:08.424162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.654 [2024-11-04 14:55:08.426877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.654 [2024-11-04 14:55:08.427103] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:38.654 pt2 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:38.654 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.655 malloc3 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.655 [2024-11-04 14:55:08.489932] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:38.655 [2024-11-04 14:55:08.490007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.655 [2024-11-04 14:55:08.490040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:38.655 [2024-11-04 14:55:08.490055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.655 [2024-11-04 14:55:08.492849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.655 [2024-11-04 14:55:08.492895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:38.655 pt3 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.655 [2024-11-04 14:55:08.497991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:38.655 [2024-11-04 14:55:08.500488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:38.655 [2024-11-04 14:55:08.500632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:38.655 [2024-11-04 14:55:08.500877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:38.655 [2024-11-04 14:55:08.500924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:22:38.655 [2024-11-04 14:55:08.501217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:38.655 [2024-11-04 14:55:08.506727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:38.655 [2024-11-04 14:55:08.506861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:38.655 [2024-11-04 14:55:08.507251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.655 
14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.655 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.914 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.914 "name": "raid_bdev1", 00:22:38.914 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:38.914 "strip_size_kb": 64, 00:22:38.914 "state": "online", 00:22:38.914 "raid_level": "raid5f", 00:22:38.914 "superblock": true, 00:22:38.914 "num_base_bdevs": 3, 00:22:38.914 "num_base_bdevs_discovered": 3, 00:22:38.914 "num_base_bdevs_operational": 3, 00:22:38.914 "base_bdevs_list": [ 00:22:38.914 { 00:22:38.914 "name": "pt1", 00:22:38.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:38.914 "is_configured": true, 00:22:38.914 "data_offset": 2048, 00:22:38.914 "data_size": 63488 00:22:38.914 }, 00:22:38.914 { 00:22:38.914 "name": "pt2", 00:22:38.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:38.914 "is_configured": true, 00:22:38.914 "data_offset": 2048, 00:22:38.914 "data_size": 63488 00:22:38.914 }, 00:22:38.914 { 00:22:38.914 "name": "pt3", 00:22:38.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:38.914 "is_configured": true, 00:22:38.914 "data_offset": 2048, 00:22:38.914 "data_size": 63488 00:22:38.914 } 00:22:38.914 ] 00:22:38.914 }' 00:22:38.914 14:55:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.914 14:55:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:39.172 14:55:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.172 [2024-11-04 14:55:09.029718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.172 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:39.431 "name": "raid_bdev1", 00:22:39.431 "aliases": [ 00:22:39.431 "4029746b-165d-4ae9-91d4-55d6f130ce77" 00:22:39.431 ], 00:22:39.431 "product_name": "Raid Volume", 00:22:39.431 "block_size": 512, 00:22:39.431 "num_blocks": 126976, 00:22:39.431 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:39.431 "assigned_rate_limits": { 00:22:39.431 "rw_ios_per_sec": 0, 00:22:39.431 "rw_mbytes_per_sec": 0, 00:22:39.431 "r_mbytes_per_sec": 0, 00:22:39.431 "w_mbytes_per_sec": 0 00:22:39.431 }, 00:22:39.431 "claimed": false, 00:22:39.431 "zoned": false, 00:22:39.431 "supported_io_types": { 00:22:39.431 "read": true, 00:22:39.431 "write": true, 00:22:39.431 "unmap": false, 00:22:39.431 "flush": false, 00:22:39.431 "reset": true, 00:22:39.431 "nvme_admin": false, 00:22:39.431 "nvme_io": false, 00:22:39.431 "nvme_io_md": false, 
00:22:39.431 "write_zeroes": true, 00:22:39.431 "zcopy": false, 00:22:39.431 "get_zone_info": false, 00:22:39.431 "zone_management": false, 00:22:39.431 "zone_append": false, 00:22:39.431 "compare": false, 00:22:39.431 "compare_and_write": false, 00:22:39.431 "abort": false, 00:22:39.431 "seek_hole": false, 00:22:39.431 "seek_data": false, 00:22:39.431 "copy": false, 00:22:39.431 "nvme_iov_md": false 00:22:39.431 }, 00:22:39.431 "driver_specific": { 00:22:39.431 "raid": { 00:22:39.431 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:39.431 "strip_size_kb": 64, 00:22:39.431 "state": "online", 00:22:39.431 "raid_level": "raid5f", 00:22:39.431 "superblock": true, 00:22:39.431 "num_base_bdevs": 3, 00:22:39.431 "num_base_bdevs_discovered": 3, 00:22:39.431 "num_base_bdevs_operational": 3, 00:22:39.431 "base_bdevs_list": [ 00:22:39.431 { 00:22:39.431 "name": "pt1", 00:22:39.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:39.431 "is_configured": true, 00:22:39.431 "data_offset": 2048, 00:22:39.431 "data_size": 63488 00:22:39.431 }, 00:22:39.431 { 00:22:39.431 "name": "pt2", 00:22:39.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:39.431 "is_configured": true, 00:22:39.431 "data_offset": 2048, 00:22:39.431 "data_size": 63488 00:22:39.431 }, 00:22:39.431 { 00:22:39.431 "name": "pt3", 00:22:39.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:39.431 "is_configured": true, 00:22:39.431 "data_offset": 2048, 00:22:39.431 "data_size": 63488 00:22:39.431 } 00:22:39.431 ] 00:22:39.431 } 00:22:39.431 } 00:22:39.431 }' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:39.431 pt2 00:22:39.431 pt3' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:39.431 
14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.431 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.689 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:39.689 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:39.689 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:39.689 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:39.689 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.689 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 [2024-11-04 14:55:09.349813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4029746b-165d-4ae9-91d4-55d6f130ce77 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4029746b-165d-4ae9-91d4-55d6f130ce77 ']' 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:39.690 14:55:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 [2024-11-04 14:55:09.397624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:39.690 [2024-11-04 14:55:09.397655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:39.690 [2024-11-04 14:55:09.397738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.690 [2024-11-04 14:55:09.397846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.690 [2024-11-04 14:55:09.397864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 [2024-11-04 14:55:09.537727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:39.690 [2024-11-04 14:55:09.540242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:39.690 [2024-11-04 14:55:09.540315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:39.690 [2024-11-04 14:55:09.540392] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:39.690 [2024-11-04 14:55:09.540463] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:39.690 [2024-11-04 14:55:09.540498] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:39.690 [2024-11-04 14:55:09.540527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:39.690 [2024-11-04 14:55:09.540542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:39.690 request: 00:22:39.690 { 00:22:39.690 "name": "raid_bdev1", 00:22:39.690 "raid_level": "raid5f", 00:22:39.690 "base_bdevs": [ 00:22:39.690 "malloc1", 00:22:39.690 "malloc2", 00:22:39.690 "malloc3" 00:22:39.690 ], 00:22:39.690 "strip_size_kb": 64, 00:22:39.690 "superblock": false, 00:22:39.690 "method": "bdev_raid_create", 00:22:39.690 "req_id": 1 00:22:39.690 } 00:22:39.690 Got JSON-RPC error response 00:22:39.690 response: 00:22:39.690 { 00:22:39.690 "code": -17, 00:22:39.690 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:39.690 } 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.690 
14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.949 [2024-11-04 14:55:09.605684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:39.949 [2024-11-04 14:55:09.605917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.949 [2024-11-04 14:55:09.605994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:39.949 [2024-11-04 14:55:09.606102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.949 [2024-11-04 14:55:09.608936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.949 [2024-11-04 14:55:09.609087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:39.949 [2024-11-04 14:55:09.609306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:39.949 [2024-11-04 14:55:09.609520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:39.949 pt1 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.949 "name": "raid_bdev1", 00:22:39.949 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:39.949 "strip_size_kb": 64, 00:22:39.949 "state": "configuring", 00:22:39.949 "raid_level": "raid5f", 00:22:39.949 "superblock": true, 00:22:39.949 "num_base_bdevs": 3, 00:22:39.949 "num_base_bdevs_discovered": 1, 00:22:39.949 
"num_base_bdevs_operational": 3, 00:22:39.949 "base_bdevs_list": [ 00:22:39.949 { 00:22:39.949 "name": "pt1", 00:22:39.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:39.949 "is_configured": true, 00:22:39.949 "data_offset": 2048, 00:22:39.949 "data_size": 63488 00:22:39.949 }, 00:22:39.949 { 00:22:39.949 "name": null, 00:22:39.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:39.949 "is_configured": false, 00:22:39.949 "data_offset": 2048, 00:22:39.949 "data_size": 63488 00:22:39.949 }, 00:22:39.949 { 00:22:39.949 "name": null, 00:22:39.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:39.949 "is_configured": false, 00:22:39.949 "data_offset": 2048, 00:22:39.949 "data_size": 63488 00:22:39.949 } 00:22:39.949 ] 00:22:39.949 }' 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.949 14:55:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.516 [2024-11-04 14:55:10.134097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:40.516 [2024-11-04 14:55:10.134193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.516 [2024-11-04 14:55:10.134243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:40.516 [2024-11-04 14:55:10.134265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.516 [2024-11-04 14:55:10.134901] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.516 [2024-11-04 14:55:10.135183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:40.516 [2024-11-04 14:55:10.135345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:40.516 [2024-11-04 14:55:10.135388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:40.516 pt2 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.516 [2024-11-04 14:55:10.142062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.516 "name": "raid_bdev1", 00:22:40.516 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:40.516 "strip_size_kb": 64, 00:22:40.516 "state": "configuring", 00:22:40.516 "raid_level": "raid5f", 00:22:40.516 "superblock": true, 00:22:40.516 "num_base_bdevs": 3, 00:22:40.516 "num_base_bdevs_discovered": 1, 00:22:40.516 "num_base_bdevs_operational": 3, 00:22:40.516 "base_bdevs_list": [ 00:22:40.516 { 00:22:40.516 "name": "pt1", 00:22:40.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:40.516 "is_configured": true, 00:22:40.516 "data_offset": 2048, 00:22:40.516 "data_size": 63488 00:22:40.516 }, 00:22:40.516 { 00:22:40.516 "name": null, 00:22:40.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:40.516 "is_configured": false, 00:22:40.516 "data_offset": 0, 00:22:40.516 "data_size": 63488 00:22:40.516 }, 00:22:40.516 { 00:22:40.516 "name": null, 00:22:40.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:40.516 "is_configured": false, 00:22:40.516 "data_offset": 2048, 00:22:40.516 "data_size": 63488 00:22:40.516 } 00:22:40.516 ] 00:22:40.516 }' 00:22:40.516 14:55:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.516 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.774 [2024-11-04 14:55:10.654193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:40.774 [2024-11-04 14:55:10.654286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.774 [2024-11-04 14:55:10.654312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:40.774 [2024-11-04 14:55:10.654338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.774 [2024-11-04 14:55:10.654884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.774 [2024-11-04 14:55:10.654916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:40.774 [2024-11-04 14:55:10.655009] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:40.774 [2024-11-04 14:55:10.655044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:40.774 pt2 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:40.774 14:55:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.774 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.032 [2024-11-04 14:55:10.666173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:41.032 [2024-11-04 14:55:10.666251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.032 [2024-11-04 14:55:10.666273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:41.032 [2024-11-04 14:55:10.666289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.032 [2024-11-04 14:55:10.666839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.032 [2024-11-04 14:55:10.666899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:41.032 [2024-11-04 14:55:10.667013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:41.032 [2024-11-04 14:55:10.667045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:41.032 [2024-11-04 14:55:10.667195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:41.032 [2024-11-04 14:55:10.667216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:41.032 [2024-11-04 14:55:10.667566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:41.032 [2024-11-04 14:55:10.672805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:41.032 [2024-11-04 14:55:10.672832] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:41.032 [2024-11-04 14:55:10.673102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.032 pt3 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.032 "name": "raid_bdev1", 00:22:41.032 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:41.032 "strip_size_kb": 64, 00:22:41.032 "state": "online", 00:22:41.032 "raid_level": "raid5f", 00:22:41.032 "superblock": true, 00:22:41.032 "num_base_bdevs": 3, 00:22:41.032 "num_base_bdevs_discovered": 3, 00:22:41.032 "num_base_bdevs_operational": 3, 00:22:41.032 "base_bdevs_list": [ 00:22:41.032 { 00:22:41.032 "name": "pt1", 00:22:41.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:41.032 "is_configured": true, 00:22:41.032 "data_offset": 2048, 00:22:41.032 "data_size": 63488 00:22:41.032 }, 00:22:41.032 { 00:22:41.032 "name": "pt2", 00:22:41.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.032 "is_configured": true, 00:22:41.032 "data_offset": 2048, 00:22:41.032 "data_size": 63488 00:22:41.032 }, 00:22:41.032 { 00:22:41.032 "name": "pt3", 00:22:41.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:41.032 "is_configured": true, 00:22:41.032 "data_offset": 2048, 00:22:41.032 "data_size": 63488 00:22:41.032 } 00:22:41.032 ] 00:22:41.032 }' 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.032 14:55:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:41.598 
14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.598 [2024-11-04 14:55:11.191596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.598 "name": "raid_bdev1", 00:22:41.598 "aliases": [ 00:22:41.598 "4029746b-165d-4ae9-91d4-55d6f130ce77" 00:22:41.598 ], 00:22:41.598 "product_name": "Raid Volume", 00:22:41.598 "block_size": 512, 00:22:41.598 "num_blocks": 126976, 00:22:41.598 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:41.598 "assigned_rate_limits": { 00:22:41.598 "rw_ios_per_sec": 0, 00:22:41.598 "rw_mbytes_per_sec": 0, 00:22:41.598 "r_mbytes_per_sec": 0, 00:22:41.598 "w_mbytes_per_sec": 0 00:22:41.598 }, 00:22:41.598 "claimed": false, 00:22:41.598 "zoned": false, 00:22:41.598 "supported_io_types": { 00:22:41.598 "read": true, 00:22:41.598 "write": true, 00:22:41.598 "unmap": false, 00:22:41.598 "flush": false, 00:22:41.598 "reset": true, 00:22:41.598 "nvme_admin": false, 00:22:41.598 "nvme_io": false, 00:22:41.598 "nvme_io_md": false, 00:22:41.598 "write_zeroes": true, 00:22:41.598 "zcopy": false, 00:22:41.598 "get_zone_info": false, 
00:22:41.598 "zone_management": false, 00:22:41.598 "zone_append": false, 00:22:41.598 "compare": false, 00:22:41.598 "compare_and_write": false, 00:22:41.598 "abort": false, 00:22:41.598 "seek_hole": false, 00:22:41.598 "seek_data": false, 00:22:41.598 "copy": false, 00:22:41.598 "nvme_iov_md": false 00:22:41.598 }, 00:22:41.598 "driver_specific": { 00:22:41.598 "raid": { 00:22:41.598 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:41.598 "strip_size_kb": 64, 00:22:41.598 "state": "online", 00:22:41.598 "raid_level": "raid5f", 00:22:41.598 "superblock": true, 00:22:41.598 "num_base_bdevs": 3, 00:22:41.598 "num_base_bdevs_discovered": 3, 00:22:41.598 "num_base_bdevs_operational": 3, 00:22:41.598 "base_bdevs_list": [ 00:22:41.598 { 00:22:41.598 "name": "pt1", 00:22:41.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:41.598 "is_configured": true, 00:22:41.598 "data_offset": 2048, 00:22:41.598 "data_size": 63488 00:22:41.598 }, 00:22:41.598 { 00:22:41.598 "name": "pt2", 00:22:41.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.598 "is_configured": true, 00:22:41.598 "data_offset": 2048, 00:22:41.598 "data_size": 63488 00:22:41.598 }, 00:22:41.598 { 00:22:41.598 "name": "pt3", 00:22:41.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:41.598 "is_configured": true, 00:22:41.598 "data_offset": 2048, 00:22:41.598 "data_size": 63488 00:22:41.598 } 00:22:41.598 ] 00:22:41.598 } 00:22:41.598 } 00:22:41.598 }' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:41.598 pt2 00:22:41.598 pt3' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.598 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:41.856 [2024-11-04 14:55:11.511518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4029746b-165d-4ae9-91d4-55d6f130ce77 '!=' 4029746b-165d-4ae9-91d4-55d6f130ce77 ']' 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:41.856 14:55:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.856 [2024-11-04 14:55:11.591381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.856 "name": "raid_bdev1", 00:22:41.856 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:41.856 "strip_size_kb": 64, 00:22:41.856 "state": "online", 00:22:41.856 "raid_level": "raid5f", 00:22:41.856 "superblock": true, 00:22:41.856 "num_base_bdevs": 3, 00:22:41.856 "num_base_bdevs_discovered": 2, 00:22:41.856 "num_base_bdevs_operational": 2, 00:22:41.856 "base_bdevs_list": [ 00:22:41.856 { 00:22:41.856 "name": null, 00:22:41.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.856 "is_configured": false, 00:22:41.856 "data_offset": 0, 00:22:41.856 "data_size": 63488 00:22:41.856 }, 00:22:41.856 { 00:22:41.856 "name": "pt2", 00:22:41.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.856 "is_configured": true, 00:22:41.856 "data_offset": 2048, 00:22:41.856 "data_size": 63488 00:22:41.856 }, 00:22:41.856 { 00:22:41.856 "name": "pt3", 00:22:41.856 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:41.856 "is_configured": true, 00:22:41.856 "data_offset": 2048, 00:22:41.856 "data_size": 63488 00:22:41.856 } 00:22:41.856 ] 00:22:41.856 }' 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.856 14:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.426 [2024-11-04 14:55:12.123639] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:22:42.426 [2024-11-04 14:55:12.123712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.426 [2024-11-04 14:55:12.123797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.426 [2024-11-04 14:55:12.123886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.426 [2024-11-04 14:55:12.123925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.426 14:55:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.426 [2024-11-04 14:55:12.199621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:42.426 [2024-11-04 14:55:12.199687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.426 [2024-11-04 14:55:12.199711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:42.426 [2024-11-04 14:55:12.199728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:22:42.426 [2024-11-04 14:55:12.202735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.426 [2024-11-04 14:55:12.202778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:42.426 [2024-11-04 14:55:12.202869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:42.426 [2024-11-04 14:55:12.202932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:42.426 pt2 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.426 "name": "raid_bdev1", 00:22:42.426 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:42.426 "strip_size_kb": 64, 00:22:42.426 "state": "configuring", 00:22:42.426 "raid_level": "raid5f", 00:22:42.426 "superblock": true, 00:22:42.426 "num_base_bdevs": 3, 00:22:42.426 "num_base_bdevs_discovered": 1, 00:22:42.426 "num_base_bdevs_operational": 2, 00:22:42.426 "base_bdevs_list": [ 00:22:42.426 { 00:22:42.426 "name": null, 00:22:42.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.426 "is_configured": false, 00:22:42.426 "data_offset": 2048, 00:22:42.426 "data_size": 63488 00:22:42.426 }, 00:22:42.426 { 00:22:42.426 "name": "pt2", 00:22:42.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.426 "is_configured": true, 00:22:42.426 "data_offset": 2048, 00:22:42.426 "data_size": 63488 00:22:42.426 }, 00:22:42.426 { 00:22:42.426 "name": null, 00:22:42.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:42.426 "is_configured": false, 00:22:42.426 "data_offset": 2048, 00:22:42.426 "data_size": 63488 00:22:42.426 } 00:22:42.426 ] 00:22:42.426 }' 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.426 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.992 [2024-11-04 14:55:12.719899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:42.992 [2024-11-04 14:55:12.719988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.992 [2024-11-04 14:55:12.720022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:42.992 [2024-11-04 14:55:12.720040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.992 [2024-11-04 14:55:12.720798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.992 [2024-11-04 14:55:12.720832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:42.992 [2024-11-04 14:55:12.720930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:42.992 [2024-11-04 14:55:12.720992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:42.992 [2024-11-04 14:55:12.721155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:42.992 [2024-11-04 14:55:12.721177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:42.992 [2024-11-04 14:55:12.721557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:42.992 [2024-11-04 14:55:12.726931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:42.992 [2024-11-04 14:55:12.726975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:22:42.992 [2024-11-04 14:55:12.727437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.992 pt3 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.992 "name": "raid_bdev1", 00:22:42.992 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:42.992 "strip_size_kb": 64, 00:22:42.992 "state": "online", 00:22:42.992 "raid_level": "raid5f", 00:22:42.992 "superblock": true, 00:22:42.992 "num_base_bdevs": 3, 00:22:42.992 "num_base_bdevs_discovered": 2, 00:22:42.992 "num_base_bdevs_operational": 2, 00:22:42.992 "base_bdevs_list": [ 00:22:42.992 { 00:22:42.992 "name": null, 00:22:42.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.992 "is_configured": false, 00:22:42.992 "data_offset": 2048, 00:22:42.992 "data_size": 63488 00:22:42.992 }, 00:22:42.992 { 00:22:42.992 "name": "pt2", 00:22:42.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.992 "is_configured": true, 00:22:42.992 "data_offset": 2048, 00:22:42.992 "data_size": 63488 00:22:42.992 }, 00:22:42.992 { 00:22:42.992 "name": "pt3", 00:22:42.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:42.992 "is_configured": true, 00:22:42.992 "data_offset": 2048, 00:22:42.992 "data_size": 63488 00:22:42.992 } 00:22:42.992 ] 00:22:42.992 }' 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.992 14:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.560 [2024-11-04 14:55:13.225862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.560 [2024-11-04 14:55:13.225954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.560 [2024-11-04 14:55:13.226063] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:22:43.560 [2024-11-04 14:55:13.226179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.560 [2024-11-04 14:55:13.226209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.560 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:43.560 14:55:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.561 [2024-11-04 14:55:13.293875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:43.561 [2024-11-04 14:55:13.293948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.561 [2024-11-04 14:55:13.293975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:43.561 [2024-11-04 14:55:13.293988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.561 [2024-11-04 14:55:13.297237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.561 [2024-11-04 14:55:13.297296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:43.561 [2024-11-04 14:55:13.297391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:43.561 [2024-11-04 14:55:13.297447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:43.561 [2024-11-04 14:55:13.297626] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:43.561 [2024-11-04 14:55:13.297644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.561 [2024-11-04 14:55:13.297666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:43.561 [2024-11-04 14:55:13.297736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:43.561 pt1 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:22:43.561 14:55:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.561 "name": "raid_bdev1", 00:22:43.561 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:43.561 "strip_size_kb": 64, 00:22:43.561 "state": "configuring", 00:22:43.561 "raid_level": "raid5f", 00:22:43.561 
"superblock": true, 00:22:43.561 "num_base_bdevs": 3, 00:22:43.561 "num_base_bdevs_discovered": 1, 00:22:43.561 "num_base_bdevs_operational": 2, 00:22:43.561 "base_bdevs_list": [ 00:22:43.561 { 00:22:43.561 "name": null, 00:22:43.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.561 "is_configured": false, 00:22:43.561 "data_offset": 2048, 00:22:43.561 "data_size": 63488 00:22:43.561 }, 00:22:43.561 { 00:22:43.561 "name": "pt2", 00:22:43.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.561 "is_configured": true, 00:22:43.561 "data_offset": 2048, 00:22:43.561 "data_size": 63488 00:22:43.561 }, 00:22:43.561 { 00:22:43.561 "name": null, 00:22:43.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:43.561 "is_configured": false, 00:22:43.561 "data_offset": 2048, 00:22:43.561 "data_size": 63488 00:22:43.561 } 00:22:43.561 ] 00:22:43.561 }' 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.561 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.127 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:44.127 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:44.127 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.127 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.127 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.128 [2024-11-04 14:55:13.882062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:44.128 [2024-11-04 14:55:13.882153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.128 [2024-11-04 14:55:13.882185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:44.128 [2024-11-04 14:55:13.882200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.128 [2024-11-04 14:55:13.882816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.128 [2024-11-04 14:55:13.882849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:44.128 [2024-11-04 14:55:13.882969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:44.128 [2024-11-04 14:55:13.883000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:44.128 [2024-11-04 14:55:13.883161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:44.128 [2024-11-04 14:55:13.883177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:44.128 [2024-11-04 14:55:13.883496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:44.128 [2024-11-04 14:55:13.888446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:44.128 [2024-11-04 14:55:13.888480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:44.128 pt3 00:22:44.128 [2024-11-04 14:55:13.888775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.128 "name": "raid_bdev1", 00:22:44.128 "uuid": "4029746b-165d-4ae9-91d4-55d6f130ce77", 00:22:44.128 "strip_size_kb": 64, 00:22:44.128 "state": "online", 00:22:44.128 "raid_level": 
"raid5f", 00:22:44.128 "superblock": true, 00:22:44.128 "num_base_bdevs": 3, 00:22:44.128 "num_base_bdevs_discovered": 2, 00:22:44.128 "num_base_bdevs_operational": 2, 00:22:44.128 "base_bdevs_list": [ 00:22:44.128 { 00:22:44.128 "name": null, 00:22:44.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.128 "is_configured": false, 00:22:44.128 "data_offset": 2048, 00:22:44.128 "data_size": 63488 00:22:44.128 }, 00:22:44.128 { 00:22:44.128 "name": "pt2", 00:22:44.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.128 "is_configured": true, 00:22:44.128 "data_offset": 2048, 00:22:44.128 "data_size": 63488 00:22:44.128 }, 00:22:44.128 { 00:22:44.128 "name": "pt3", 00:22:44.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:44.128 "is_configured": true, 00:22:44.128 "data_offset": 2048, 00:22:44.128 "data_size": 63488 00:22:44.128 } 00:22:44.128 ] 00:22:44.128 }' 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.128 14:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.693 [2024-11-04 14:55:14.463299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4029746b-165d-4ae9-91d4-55d6f130ce77 '!=' 4029746b-165d-4ae9-91d4-55d6f130ce77 ']' 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81649 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81649 ']' 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81649 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81649 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:44.693 killing process with pid 81649 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81649' 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81649 00:22:44.693 [2024-11-04 14:55:14.540147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:44.693 14:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81649 
00:22:44.693 [2024-11-04 14:55:14.540340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:44.693 [2024-11-04 14:55:14.540437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:44.693 [2024-11-04 14:55:14.540474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:44.950 [2024-11-04 14:55:14.824140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.324 14:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:46.324 00:22:46.324 real 0m8.748s 00:22:46.324 user 0m14.232s 00:22:46.324 sys 0m1.280s 00:22:46.324 14:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:46.324 ************************************ 00:22:46.324 END TEST raid5f_superblock_test 00:22:46.324 ************************************ 00:22:46.324 14:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.324 14:55:15 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:22:46.324 14:55:15 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:22:46.324 14:55:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:46.324 14:55:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:46.324 14:55:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.324 ************************************ 00:22:46.324 START TEST raid5f_rebuild_test 00:22:46.324 ************************************ 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:46.324 14:55:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:22:46.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82094 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82094 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 82094 ']' 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:46.324 14:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.324 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:22:46.324 Zero copy mechanism will not be used. 00:22:46.324 [2024-11-04 14:55:16.117303] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:22:46.324 [2024-11-04 14:55:16.117513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82094 ] 00:22:46.582 [2024-11-04 14:55:16.312640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.582 [2024-11-04 14:55:16.468047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.840 [2024-11-04 14:55:16.688896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:46.840 [2024-11-04 14:55:16.688968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.406 BaseBdev1_malloc 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.406 14:55:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.406 [2024-11-04 14:55:17.122033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:47.406 [2024-11-04 14:55:17.122152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.406 [2024-11-04 14:55:17.122188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:47.406 [2024-11-04 14:55:17.122209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.406 [2024-11-04 14:55:17.125208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.406 [2024-11-04 14:55:17.125276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:47.406 BaseBdev1 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.406 BaseBdev2_malloc 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.406 [2024-11-04 14:55:17.182477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:22:47.406 [2024-11-04 14:55:17.182921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.406 [2024-11-04 14:55:17.182960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:47.406 [2024-11-04 14:55:17.182982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.406 [2024-11-04 14:55:17.186047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.406 [2024-11-04 14:55:17.186239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:47.406 BaseBdev2 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.406 BaseBdev3_malloc 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.406 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.406 [2024-11-04 14:55:17.253955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:47.406 [2024-11-04 14:55:17.254257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.406 [2024-11-04 14:55:17.254345] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:22:47.406 [2024-11-04 14:55:17.254510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.406 [2024-11-04 14:55:17.257884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.406 [2024-11-04 14:55:17.257937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:47.406 BaseBdev3 00:22:47.407 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.407 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:47.407 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.407 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.665 spare_malloc 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.665 spare_delay 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.665 [2024-11-04 14:55:17.317934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:47.665 [2024-11-04 14:55:17.318006] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.665 [2024-11-04 14:55:17.318034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:47.665 [2024-11-04 14:55:17.318053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.665 [2024-11-04 14:55:17.321679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.665 [2024-11-04 14:55:17.321859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:47.665 spare 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.665 [2024-11-04 14:55:17.330267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.665 [2024-11-04 14:55:17.333335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:47.665 [2024-11-04 14:55:17.333624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:47.665 [2024-11-04 14:55:17.333805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:47.665 [2024-11-04 14:55:17.333864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:47.665 [2024-11-04 14:55:17.334320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:47.665 [2024-11-04 14:55:17.340199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:47.665 [2024-11-04 14:55:17.340429] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:47.665 [2024-11-04 14:55:17.340807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.665 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.666 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.666 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.666 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.666 14:55:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.666 "name": "raid_bdev1", 00:22:47.666 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:47.666 "strip_size_kb": 64, 00:22:47.666 "state": "online", 00:22:47.666 "raid_level": "raid5f", 00:22:47.666 "superblock": false, 00:22:47.666 "num_base_bdevs": 3, 00:22:47.666 "num_base_bdevs_discovered": 3, 00:22:47.666 "num_base_bdevs_operational": 3, 00:22:47.666 "base_bdevs_list": [ 00:22:47.666 { 00:22:47.666 "name": "BaseBdev1", 00:22:47.666 "uuid": "7def6fe9-d3dd-56d8-8596-3cb8f3949649", 00:22:47.666 "is_configured": true, 00:22:47.666 "data_offset": 0, 00:22:47.666 "data_size": 65536 00:22:47.666 }, 00:22:47.666 { 00:22:47.666 "name": "BaseBdev2", 00:22:47.666 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:47.666 "is_configured": true, 00:22:47.666 "data_offset": 0, 00:22:47.666 "data_size": 65536 00:22:47.666 }, 00:22:47.666 { 00:22:47.666 "name": "BaseBdev3", 00:22:47.666 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:47.666 "is_configured": true, 00:22:47.666 "data_offset": 0, 00:22:47.666 "data_size": 65536 00:22:47.666 } 00:22:47.666 ] 00:22:47.666 }' 00:22:47.666 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.666 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.236 [2024-11-04 14:55:17.879760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:22:48.236 14:55:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:48.501 [2024-11-04 14:55:18.207817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:48.501 /dev/nbd0 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:48.501 1+0 records in 00:22:48.501 1+0 records out 00:22:48.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638703 s, 6.4 MB/s 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:48.501 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:49.067 512+0 records in 00:22:49.067 512+0 records out 00:22:49.067 67108864 bytes (67 MB, 64 MiB) copied, 0.51492 s, 130 MB/s 00:22:49.067 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:49.067 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:49.067 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:49.067 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:49.067 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:49.067 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:49.067 14:55:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:49.325 [2024-11-04 14:55:19.136595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:49.325 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:49.325 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.326 [2024-11-04 14:55:19.174479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.326 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.584 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.584 "name": "raid_bdev1", 00:22:49.584 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:49.584 "strip_size_kb": 64, 00:22:49.584 "state": "online", 00:22:49.584 "raid_level": "raid5f", 00:22:49.584 "superblock": false, 00:22:49.584 "num_base_bdevs": 3, 00:22:49.584 "num_base_bdevs_discovered": 2, 00:22:49.584 "num_base_bdevs_operational": 2, 00:22:49.584 "base_bdevs_list": [ 00:22:49.584 { 00:22:49.584 "name": null, 00:22:49.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.584 "is_configured": false, 00:22:49.584 "data_offset": 0, 00:22:49.584 "data_size": 65536 00:22:49.584 }, 00:22:49.584 { 00:22:49.584 "name": "BaseBdev2", 00:22:49.584 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:49.584 "is_configured": true, 00:22:49.584 "data_offset": 0, 00:22:49.584 "data_size": 65536 00:22:49.584 }, 00:22:49.584 { 00:22:49.584 "name": "BaseBdev3", 00:22:49.584 "uuid": 
"06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:49.584 "is_configured": true, 00:22:49.584 "data_offset": 0, 00:22:49.584 "data_size": 65536 00:22:49.584 } 00:22:49.584 ] 00:22:49.584 }' 00:22:49.584 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.584 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.843 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:49.843 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.843 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.843 [2024-11-04 14:55:19.694849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:49.843 [2024-11-04 14:55:19.711637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:22:49.843 14:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.843 14:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:49.843 [2024-11-04 14:55:19.719104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.217 14:55:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.217 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.217 "name": "raid_bdev1", 00:22:51.217 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:51.217 "strip_size_kb": 64, 00:22:51.217 "state": "online", 00:22:51.217 "raid_level": "raid5f", 00:22:51.217 "superblock": false, 00:22:51.217 "num_base_bdevs": 3, 00:22:51.217 "num_base_bdevs_discovered": 3, 00:22:51.217 "num_base_bdevs_operational": 3, 00:22:51.217 "process": { 00:22:51.217 "type": "rebuild", 00:22:51.217 "target": "spare", 00:22:51.217 "progress": { 00:22:51.217 "blocks": 18432, 00:22:51.217 "percent": 14 00:22:51.217 } 00:22:51.217 }, 00:22:51.217 "base_bdevs_list": [ 00:22:51.217 { 00:22:51.217 "name": "spare", 00:22:51.217 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:51.217 "is_configured": true, 00:22:51.217 "data_offset": 0, 00:22:51.217 "data_size": 65536 00:22:51.217 }, 00:22:51.217 { 00:22:51.217 "name": "BaseBdev2", 00:22:51.217 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:51.217 "is_configured": true, 00:22:51.217 "data_offset": 0, 00:22:51.217 "data_size": 65536 00:22:51.217 }, 00:22:51.217 { 00:22:51.217 "name": "BaseBdev3", 00:22:51.217 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:51.217 "is_configured": true, 00:22:51.217 "data_offset": 0, 00:22:51.217 "data_size": 65536 00:22:51.218 } 00:22:51.218 ] 00:22:51.218 }' 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.218 [2024-11-04 14:55:20.890078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:51.218 [2024-11-04 14:55:20.934326] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:51.218 [2024-11-04 14:55:20.934455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.218 [2024-11-04 14:55:20.934499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:51.218 [2024-11-04 14:55:20.934510] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.218 14:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.218 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.218 "name": "raid_bdev1", 00:22:51.218 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:51.218 "strip_size_kb": 64, 00:22:51.218 "state": "online", 00:22:51.218 "raid_level": "raid5f", 00:22:51.218 "superblock": false, 00:22:51.218 "num_base_bdevs": 3, 00:22:51.218 "num_base_bdevs_discovered": 2, 00:22:51.218 "num_base_bdevs_operational": 2, 00:22:51.218 "base_bdevs_list": [ 00:22:51.218 { 00:22:51.218 "name": null, 00:22:51.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.218 "is_configured": false, 00:22:51.218 "data_offset": 0, 00:22:51.218 "data_size": 65536 00:22:51.218 }, 00:22:51.218 { 00:22:51.218 "name": "BaseBdev2", 00:22:51.218 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:51.218 "is_configured": true, 00:22:51.218 "data_offset": 0, 00:22:51.218 "data_size": 65536 00:22:51.218 }, 00:22:51.218 { 00:22:51.218 "name": "BaseBdev3", 00:22:51.218 "uuid": 
"06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:51.218 "is_configured": true, 00:22:51.218 "data_offset": 0, 00:22:51.218 "data_size": 65536 00:22:51.218 } 00:22:51.218 ] 00:22:51.218 }' 00:22:51.218 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.218 14:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.784 "name": "raid_bdev1", 00:22:51.784 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:51.784 "strip_size_kb": 64, 00:22:51.784 "state": "online", 00:22:51.784 "raid_level": "raid5f", 00:22:51.784 "superblock": false, 00:22:51.784 "num_base_bdevs": 3, 00:22:51.784 "num_base_bdevs_discovered": 2, 00:22:51.784 "num_base_bdevs_operational": 2, 00:22:51.784 "base_bdevs_list": [ 00:22:51.784 { 00:22:51.784 
"name": null, 00:22:51.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.784 "is_configured": false, 00:22:51.784 "data_offset": 0, 00:22:51.784 "data_size": 65536 00:22:51.784 }, 00:22:51.784 { 00:22:51.784 "name": "BaseBdev2", 00:22:51.784 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:51.784 "is_configured": true, 00:22:51.784 "data_offset": 0, 00:22:51.784 "data_size": 65536 00:22:51.784 }, 00:22:51.784 { 00:22:51.784 "name": "BaseBdev3", 00:22:51.784 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:51.784 "is_configured": true, 00:22:51.784 "data_offset": 0, 00:22:51.784 "data_size": 65536 00:22:51.784 } 00:22:51.784 ] 00:22:51.784 }' 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.784 [2024-11-04 14:55:21.648355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:51.784 [2024-11-04 14:55:21.662838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.784 14:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:51.784 [2024-11-04 14:55:21.670147] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.157 "name": "raid_bdev1", 00:22:53.157 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:53.157 "strip_size_kb": 64, 00:22:53.157 "state": "online", 00:22:53.157 "raid_level": "raid5f", 00:22:53.157 "superblock": false, 00:22:53.157 "num_base_bdevs": 3, 00:22:53.157 "num_base_bdevs_discovered": 3, 00:22:53.157 "num_base_bdevs_operational": 3, 00:22:53.157 "process": { 00:22:53.157 "type": "rebuild", 00:22:53.157 "target": "spare", 00:22:53.157 "progress": { 00:22:53.157 "blocks": 18432, 00:22:53.157 "percent": 14 00:22:53.157 } 00:22:53.157 }, 00:22:53.157 "base_bdevs_list": [ 00:22:53.157 { 00:22:53.157 "name": "spare", 00:22:53.157 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:53.157 "is_configured": true, 00:22:53.157 "data_offset": 0, 
00:22:53.157 "data_size": 65536 00:22:53.157 }, 00:22:53.157 { 00:22:53.157 "name": "BaseBdev2", 00:22:53.157 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:53.157 "is_configured": true, 00:22:53.157 "data_offset": 0, 00:22:53.157 "data_size": 65536 00:22:53.157 }, 00:22:53.157 { 00:22:53.157 "name": "BaseBdev3", 00:22:53.157 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:53.157 "is_configured": true, 00:22:53.157 "data_offset": 0, 00:22:53.157 "data_size": 65536 00:22:53.157 } 00:22:53.157 ] 00:22:53.157 }' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=604 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:53.157 14:55:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.157 "name": "raid_bdev1", 00:22:53.157 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:53.157 "strip_size_kb": 64, 00:22:53.157 "state": "online", 00:22:53.157 "raid_level": "raid5f", 00:22:53.157 "superblock": false, 00:22:53.157 "num_base_bdevs": 3, 00:22:53.157 "num_base_bdevs_discovered": 3, 00:22:53.157 "num_base_bdevs_operational": 3, 00:22:53.157 "process": { 00:22:53.157 "type": "rebuild", 00:22:53.157 "target": "spare", 00:22:53.157 "progress": { 00:22:53.157 "blocks": 22528, 00:22:53.157 "percent": 17 00:22:53.157 } 00:22:53.157 }, 00:22:53.157 "base_bdevs_list": [ 00:22:53.157 { 00:22:53.157 "name": "spare", 00:22:53.157 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:53.157 "is_configured": true, 00:22:53.157 "data_offset": 0, 00:22:53.157 "data_size": 65536 00:22:53.157 }, 00:22:53.157 { 00:22:53.157 "name": "BaseBdev2", 00:22:53.157 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:53.157 "is_configured": true, 00:22:53.157 "data_offset": 0, 00:22:53.157 "data_size": 65536 00:22:53.157 }, 00:22:53.157 { 00:22:53.157 "name": "BaseBdev3", 00:22:53.157 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:53.157 "is_configured": true, 00:22:53.157 "data_offset": 0, 00:22:53.157 "data_size": 65536 00:22:53.157 } 
00:22:53.157 ] 00:22:53.157 }' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:53.157 14:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:54.533 14:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:54.533 14:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.533 14:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.533 14:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:54.533 14:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:54.533 14:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.533 14:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.534 14:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.534 14:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.534 14:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.534 14:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.534 14:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.534 "name": "raid_bdev1", 00:22:54.534 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:54.534 
"strip_size_kb": 64, 00:22:54.534 "state": "online", 00:22:54.534 "raid_level": "raid5f", 00:22:54.534 "superblock": false, 00:22:54.534 "num_base_bdevs": 3, 00:22:54.534 "num_base_bdevs_discovered": 3, 00:22:54.534 "num_base_bdevs_operational": 3, 00:22:54.534 "process": { 00:22:54.534 "type": "rebuild", 00:22:54.534 "target": "spare", 00:22:54.534 "progress": { 00:22:54.534 "blocks": 45056, 00:22:54.534 "percent": 34 00:22:54.534 } 00:22:54.534 }, 00:22:54.534 "base_bdevs_list": [ 00:22:54.534 { 00:22:54.534 "name": "spare", 00:22:54.534 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:54.534 "is_configured": true, 00:22:54.534 "data_offset": 0, 00:22:54.534 "data_size": 65536 00:22:54.534 }, 00:22:54.534 { 00:22:54.534 "name": "BaseBdev2", 00:22:54.534 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:54.534 "is_configured": true, 00:22:54.534 "data_offset": 0, 00:22:54.534 "data_size": 65536 00:22:54.534 }, 00:22:54.534 { 00:22:54.534 "name": "BaseBdev3", 00:22:54.534 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:54.534 "is_configured": true, 00:22:54.534 "data_offset": 0, 00:22:54.534 "data_size": 65536 00:22:54.534 } 00:22:54.534 ] 00:22:54.534 }' 00:22:54.534 14:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.534 14:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.534 14:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.534 14:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.534 14:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.467 14:55:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.467 "name": "raid_bdev1", 00:22:55.467 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:55.467 "strip_size_kb": 64, 00:22:55.467 "state": "online", 00:22:55.467 "raid_level": "raid5f", 00:22:55.467 "superblock": false, 00:22:55.467 "num_base_bdevs": 3, 00:22:55.467 "num_base_bdevs_discovered": 3, 00:22:55.467 "num_base_bdevs_operational": 3, 00:22:55.467 "process": { 00:22:55.467 "type": "rebuild", 00:22:55.467 "target": "spare", 00:22:55.467 "progress": { 00:22:55.467 "blocks": 69632, 00:22:55.467 "percent": 53 00:22:55.467 } 00:22:55.467 }, 00:22:55.467 "base_bdevs_list": [ 00:22:55.467 { 00:22:55.467 "name": "spare", 00:22:55.467 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:55.467 "is_configured": true, 00:22:55.467 "data_offset": 0, 00:22:55.467 "data_size": 65536 00:22:55.467 }, 00:22:55.467 { 00:22:55.467 "name": "BaseBdev2", 00:22:55.467 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:55.467 
"is_configured": true, 00:22:55.467 "data_offset": 0, 00:22:55.467 "data_size": 65536 00:22:55.467 }, 00:22:55.467 { 00:22:55.467 "name": "BaseBdev3", 00:22:55.467 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:55.467 "is_configured": true, 00:22:55.467 "data_offset": 0, 00:22:55.467 "data_size": 65536 00:22:55.467 } 00:22:55.467 ] 00:22:55.467 }' 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.467 14:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:56.843 "name": "raid_bdev1", 00:22:56.843 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:56.843 "strip_size_kb": 64, 00:22:56.843 "state": "online", 00:22:56.843 "raid_level": "raid5f", 00:22:56.843 "superblock": false, 00:22:56.843 "num_base_bdevs": 3, 00:22:56.843 "num_base_bdevs_discovered": 3, 00:22:56.843 "num_base_bdevs_operational": 3, 00:22:56.843 "process": { 00:22:56.843 "type": "rebuild", 00:22:56.843 "target": "spare", 00:22:56.843 "progress": { 00:22:56.843 "blocks": 92160, 00:22:56.843 "percent": 70 00:22:56.843 } 00:22:56.843 }, 00:22:56.843 "base_bdevs_list": [ 00:22:56.843 { 00:22:56.843 "name": "spare", 00:22:56.843 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:56.843 "is_configured": true, 00:22:56.843 "data_offset": 0, 00:22:56.843 "data_size": 65536 00:22:56.843 }, 00:22:56.843 { 00:22:56.843 "name": "BaseBdev2", 00:22:56.843 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:56.843 "is_configured": true, 00:22:56.843 "data_offset": 0, 00:22:56.843 "data_size": 65536 00:22:56.843 }, 00:22:56.843 { 00:22:56.843 "name": "BaseBdev3", 00:22:56.843 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:56.843 "is_configured": true, 00:22:56.843 "data_offset": 0, 00:22:56.843 "data_size": 65536 00:22:56.843 } 00:22:56.843 ] 00:22:56.843 }' 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:56.843 14:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.843 14:55:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.776 14:55:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.777 14:55:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.777 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.777 "name": "raid_bdev1", 00:22:57.777 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:57.777 "strip_size_kb": 64, 00:22:57.777 "state": "online", 00:22:57.777 "raid_level": "raid5f", 00:22:57.777 "superblock": false, 00:22:57.777 "num_base_bdevs": 3, 00:22:57.777 "num_base_bdevs_discovered": 3, 00:22:57.777 "num_base_bdevs_operational": 3, 00:22:57.777 "process": { 00:22:57.777 "type": "rebuild", 00:22:57.777 "target": "spare", 00:22:57.777 "progress": { 00:22:57.777 "blocks": 116736, 00:22:57.777 "percent": 89 00:22:57.777 } 00:22:57.777 }, 00:22:57.777 "base_bdevs_list": [ 00:22:57.777 { 
00:22:57.777 "name": "spare", 00:22:57.777 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:57.777 "is_configured": true, 00:22:57.777 "data_offset": 0, 00:22:57.777 "data_size": 65536 00:22:57.777 }, 00:22:57.777 { 00:22:57.777 "name": "BaseBdev2", 00:22:57.777 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:57.777 "is_configured": true, 00:22:57.777 "data_offset": 0, 00:22:57.777 "data_size": 65536 00:22:57.777 }, 00:22:57.777 { 00:22:57.777 "name": "BaseBdev3", 00:22:57.777 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:57.777 "is_configured": true, 00:22:57.777 "data_offset": 0, 00:22:57.777 "data_size": 65536 00:22:57.777 } 00:22:57.777 ] 00:22:57.777 }' 00:22:57.777 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.777 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.777 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.777 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.777 14:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:58.342 [2024-11-04 14:55:28.147889] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:58.342 [2024-11-04 14:55:28.148008] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:58.342 [2024-11-04 14:55:28.148080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.909 14:55:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.909 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.909 "name": "raid_bdev1", 00:22:58.909 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:58.909 "strip_size_kb": 64, 00:22:58.909 "state": "online", 00:22:58.909 "raid_level": "raid5f", 00:22:58.909 "superblock": false, 00:22:58.909 "num_base_bdevs": 3, 00:22:58.909 "num_base_bdevs_discovered": 3, 00:22:58.909 "num_base_bdevs_operational": 3, 00:22:58.909 "base_bdevs_list": [ 00:22:58.909 { 00:22:58.910 "name": "spare", 00:22:58.910 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:58.910 "is_configured": true, 00:22:58.910 "data_offset": 0, 00:22:58.910 "data_size": 65536 00:22:58.910 }, 00:22:58.910 { 00:22:58.910 "name": "BaseBdev2", 00:22:58.910 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:58.910 "is_configured": true, 00:22:58.910 "data_offset": 0, 00:22:58.910 "data_size": 65536 00:22:58.910 }, 00:22:58.910 { 00:22:58.910 "name": "BaseBdev3", 00:22:58.910 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:58.910 "is_configured": true, 00:22:58.910 "data_offset": 0, 00:22:58.910 "data_size": 65536 00:22:58.910 } 
00:22:58.910 ] 00:22:58.910 }' 00:22:58.910 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.910 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:58.910 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.168 "name": "raid_bdev1", 00:22:59.168 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:59.168 "strip_size_kb": 64, 00:22:59.168 "state": "online", 00:22:59.168 "raid_level": "raid5f", 00:22:59.168 "superblock": false, 
00:22:59.168 "num_base_bdevs": 3, 00:22:59.168 "num_base_bdevs_discovered": 3, 00:22:59.168 "num_base_bdevs_operational": 3, 00:22:59.168 "base_bdevs_list": [ 00:22:59.168 { 00:22:59.168 "name": "spare", 00:22:59.168 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:59.168 "is_configured": true, 00:22:59.168 "data_offset": 0, 00:22:59.168 "data_size": 65536 00:22:59.168 }, 00:22:59.168 { 00:22:59.168 "name": "BaseBdev2", 00:22:59.168 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:59.168 "is_configured": true, 00:22:59.168 "data_offset": 0, 00:22:59.168 "data_size": 65536 00:22:59.168 }, 00:22:59.168 { 00:22:59.168 "name": "BaseBdev3", 00:22:59.168 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 00:22:59.168 "is_configured": true, 00:22:59.168 "data_offset": 0, 00:22:59.168 "data_size": 65536 00:22:59.168 } 00:22:59.168 ] 00:22:59.168 }' 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:59.168 
14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.168 14:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.168 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.168 "name": "raid_bdev1", 00:22:59.168 "uuid": "0ad0c65d-3909-42e5-aad0-32b46e304e1e", 00:22:59.168 "strip_size_kb": 64, 00:22:59.168 "state": "online", 00:22:59.168 "raid_level": "raid5f", 00:22:59.168 "superblock": false, 00:22:59.168 "num_base_bdevs": 3, 00:22:59.168 "num_base_bdevs_discovered": 3, 00:22:59.168 "num_base_bdevs_operational": 3, 00:22:59.168 "base_bdevs_list": [ 00:22:59.168 { 00:22:59.168 "name": "spare", 00:22:59.168 "uuid": "0af99dff-0b48-5b34-9620-0f21ec223556", 00:22:59.168 "is_configured": true, 00:22:59.168 "data_offset": 0, 00:22:59.168 "data_size": 65536 00:22:59.168 }, 00:22:59.168 { 00:22:59.168 "name": "BaseBdev2", 00:22:59.168 "uuid": "3256283d-56d6-56d1-af7e-a20c3f62ce1c", 00:22:59.168 "is_configured": true, 00:22:59.168 "data_offset": 0, 00:22:59.168 "data_size": 65536 00:22:59.168 }, 00:22:59.168 { 00:22:59.168 "name": "BaseBdev3", 00:22:59.168 "uuid": "06d8cebc-6d36-5ad4-bf71-d8b6456a473a", 
00:22:59.168 "is_configured": true, 00:22:59.168 "data_offset": 0, 00:22:59.168 "data_size": 65536 00:22:59.168 } 00:22:59.168 ] 00:22:59.168 }' 00:22:59.168 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.168 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.734 [2024-11-04 14:55:29.497219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.734 [2024-11-04 14:55:29.497293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.734 [2024-11-04 14:55:29.497395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.734 [2024-11-04 14:55:29.497502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.734 [2024-11-04 14:55:29.497527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:59.734 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:59.735 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:59.735 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:59.735 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:59.735 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:59.735 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:59.735 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:59.735 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:59.992 /dev/nbd0 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:00.250 1+0 records in 00:23:00.250 1+0 records out 00:23:00.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307245 s, 13.3 MB/s 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:00.250 14:55:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:00.508 /dev/nbd1 00:23:00.508 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:00.508 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:00.508 14:55:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:00.508 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:00.508 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:00.509 1+0 records in 00:23:00.509 1+0 records out 00:23:00.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442288 s, 9.3 MB/s 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:00.509 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:00.767 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:00.767 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:00.767 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:00.767 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:00.767 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:00.767 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.767 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:01.026 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82094 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 82094 ']' 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 82094 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:01.290 14:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82094 00:23:01.290 14:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:01.290 14:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:01.290 killing process with pid 82094 00:23:01.290 14:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82094' 00:23:01.290 14:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 82094 00:23:01.290 
Received shutdown signal, test time was about 60.000000 seconds 00:23:01.290 00:23:01.290 Latency(us) 00:23:01.290 [2024-11-04T14:55:31.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.290 [2024-11-04T14:55:31.182Z] =================================================================================================================== 00:23:01.290 [2024-11-04T14:55:31.182Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.290 [2024-11-04 14:55:31.011660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:01.291 14:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 82094 00:23:01.548 [2024-11-04 14:55:31.376718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:23:02.920 00:23:02.920 real 0m16.478s 00:23:02.920 user 0m20.937s 00:23:02.920 sys 0m2.112s 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.920 ************************************ 00:23:02.920 END TEST raid5f_rebuild_test 00:23:02.920 ************************************ 00:23:02.920 14:55:32 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:23:02.920 14:55:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:02.920 14:55:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:02.920 14:55:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:02.920 ************************************ 00:23:02.920 START TEST raid5f_rebuild_test_sb 00:23:02.920 ************************************ 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:23:02.920 
14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82541 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82541 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82541 ']' 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:02.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:02.920 14:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.920 [2024-11-04 14:55:32.651786] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:23:02.920 [2024-11-04 14:55:32.652013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82541 ] 00:23:02.920 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:02.920 Zero copy mechanism will not be used. 00:23:03.178 [2024-11-04 14:55:32.842481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.178 [2024-11-04 14:55:32.982794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.439 [2024-11-04 14:55:33.205437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.439 [2024-11-04 14:55:33.205545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 BaseBdev1_malloc 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 [2024-11-04 14:55:33.642453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:04.010 [2024-11-04 14:55:33.642608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.010 [2024-11-04 14:55:33.642654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:04.010 [2024-11-04 14:55:33.642672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.010 [2024-11-04 14:55:33.645643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.010 [2024-11-04 14:55:33.645692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:04.010 BaseBdev1 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 BaseBdev2_malloc 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 [2024-11-04 14:55:33.697021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:04.010 [2024-11-04 14:55:33.697097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.010 [2024-11-04 14:55:33.697124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:04.010 [2024-11-04 14:55:33.697144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.010 [2024-11-04 14:55:33.700062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.010 [2024-11-04 14:55:33.700111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:04.010 BaseBdev2 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 BaseBdev3_malloc 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 [2024-11-04 14:55:33.765218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:04.010 [2024-11-04 14:55:33.765335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.010 [2024-11-04 14:55:33.765364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:04.010 [2024-11-04 14:55:33.765381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.010 [2024-11-04 14:55:33.768697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.010 [2024-11-04 14:55:33.768754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:04.010 BaseBdev3 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 spare_malloc 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 spare_delay 00:23:04.010 
14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 [2024-11-04 14:55:33.830265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:04.010 [2024-11-04 14:55:33.830334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.010 [2024-11-04 14:55:33.830361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:04.010 [2024-11-04 14:55:33.830379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.010 [2024-11-04 14:55:33.833859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.010 [2024-11-04 14:55:33.833911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:04.010 spare 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 [2024-11-04 14:55:33.842655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.010 [2024-11-04 14:55:33.845736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:04.010 [2024-11-04 14:55:33.845842] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:04.010 [2024-11-04 14:55:33.846129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:04.010 [2024-11-04 14:55:33.846159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:04.010 [2024-11-04 14:55:33.846509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:04.010 [2024-11-04 14:55:33.852345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:04.010 [2024-11-04 14:55:33.852398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:04.010 [2024-11-04 14:55:33.852676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.010 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.269 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.269 "name": "raid_bdev1", 00:23:04.269 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:04.269 "strip_size_kb": 64, 00:23:04.269 "state": "online", 00:23:04.269 "raid_level": "raid5f", 00:23:04.269 "superblock": true, 00:23:04.269 "num_base_bdevs": 3, 00:23:04.269 "num_base_bdevs_discovered": 3, 00:23:04.269 "num_base_bdevs_operational": 3, 00:23:04.269 "base_bdevs_list": [ 00:23:04.269 { 00:23:04.269 "name": "BaseBdev1", 00:23:04.269 "uuid": "f8a68425-5de4-51ad-a37d-c28a0bdec345", 00:23:04.269 "is_configured": true, 00:23:04.269 "data_offset": 2048, 00:23:04.269 "data_size": 63488 00:23:04.269 }, 00:23:04.269 { 00:23:04.269 "name": "BaseBdev2", 00:23:04.269 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:04.269 "is_configured": true, 00:23:04.269 "data_offset": 2048, 00:23:04.269 "data_size": 63488 00:23:04.269 }, 00:23:04.269 { 00:23:04.269 "name": "BaseBdev3", 00:23:04.269 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:04.269 "is_configured": true, 00:23:04.269 "data_offset": 2048, 00:23:04.269 "data_size": 63488 00:23:04.269 } 00:23:04.269 ] 00:23:04.269 }' 00:23:04.269 14:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.269 14:55:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.526 [2024-11-04 14:55:34.367298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.526 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:04.835 14:55:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.835 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:05.093 [2024-11-04 14:55:34.755253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:05.093 /dev/nbd0 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 
00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.093 1+0 records in 00:23:05.093 1+0 records out 00:23:05.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272935 s, 15.0 MB/s 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.093 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:05.094 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:05.094 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.094 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:05.094 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:05.094 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:23:05.094 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:23:05.094 14:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:23:05.659 496+0 records in 00:23:05.659 496+0 records out 00:23:05.659 65011712 bytes (65 MB, 62 MiB) copied, 0.492449 s, 132 MB/s 00:23:05.659 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:05.659 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:05.659 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:05.659 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:05.659 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:05.659 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:05.659 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:05.917 [2024-11-04 14:55:35.552146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.917 [2024-11-04 14:55:35.586609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.917 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.918 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.918 "name": "raid_bdev1", 00:23:05.918 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:05.918 "strip_size_kb": 64, 00:23:05.918 "state": "online", 00:23:05.918 "raid_level": "raid5f", 00:23:05.918 "superblock": true, 00:23:05.918 "num_base_bdevs": 3, 00:23:05.918 "num_base_bdevs_discovered": 2, 00:23:05.918 "num_base_bdevs_operational": 2, 00:23:05.918 "base_bdevs_list": [ 00:23:05.918 { 00:23:05.918 "name": null, 00:23:05.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.918 "is_configured": false, 00:23:05.918 "data_offset": 0, 00:23:05.918 "data_size": 63488 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "name": "BaseBdev2", 00:23:05.918 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:05.918 "is_configured": true, 00:23:05.918 "data_offset": 2048, 00:23:05.918 "data_size": 63488 00:23:05.918 }, 00:23:05.918 { 00:23:05.918 "name": "BaseBdev3", 00:23:05.918 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:05.918 "is_configured": true, 00:23:05.918 "data_offset": 2048, 00:23:05.918 "data_size": 63488 00:23:05.918 } 00:23:05.918 ] 00:23:05.918 }' 00:23:05.918 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.918 14:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.487 14:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:06.487 14:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.487 14:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.487 [2024-11-04 14:55:36.114922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:06.487 [2024-11-04 14:55:36.131585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:23:06.487 14:55:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.487 14:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:06.487 [2024-11-04 14:55:36.139424] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.421 "name": "raid_bdev1", 00:23:07.421 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:07.421 "strip_size_kb": 64, 00:23:07.421 "state": "online", 00:23:07.421 "raid_level": "raid5f", 00:23:07.421 "superblock": true, 00:23:07.421 "num_base_bdevs": 3, 00:23:07.421 "num_base_bdevs_discovered": 3, 00:23:07.421 "num_base_bdevs_operational": 3, 00:23:07.421 "process": { 00:23:07.421 "type": "rebuild", 00:23:07.421 "target": "spare", 00:23:07.421 "progress": { 
00:23:07.421 "blocks": 18432, 00:23:07.421 "percent": 14 00:23:07.421 } 00:23:07.421 }, 00:23:07.421 "base_bdevs_list": [ 00:23:07.421 { 00:23:07.421 "name": "spare", 00:23:07.421 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:07.421 "is_configured": true, 00:23:07.421 "data_offset": 2048, 00:23:07.421 "data_size": 63488 00:23:07.421 }, 00:23:07.421 { 00:23:07.421 "name": "BaseBdev2", 00:23:07.421 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:07.421 "is_configured": true, 00:23:07.421 "data_offset": 2048, 00:23:07.421 "data_size": 63488 00:23:07.421 }, 00:23:07.421 { 00:23:07.421 "name": "BaseBdev3", 00:23:07.421 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:07.421 "is_configured": true, 00:23:07.421 "data_offset": 2048, 00:23:07.421 "data_size": 63488 00:23:07.421 } 00:23:07.421 ] 00:23:07.421 }' 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.421 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.421 [2024-11-04 14:55:37.301245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:07.680 [2024-11-04 14:55:37.353299] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:07.680 [2024-11-04 14:55:37.353399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:23:07.680 [2024-11-04 14:55:37.353435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:07.680 [2024-11-04 14:55:37.353447] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.680 14:55:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.680 "name": "raid_bdev1", 00:23:07.680 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:07.680 "strip_size_kb": 64, 00:23:07.680 "state": "online", 00:23:07.680 "raid_level": "raid5f", 00:23:07.680 "superblock": true, 00:23:07.680 "num_base_bdevs": 3, 00:23:07.680 "num_base_bdevs_discovered": 2, 00:23:07.680 "num_base_bdevs_operational": 2, 00:23:07.680 "base_bdevs_list": [ 00:23:07.680 { 00:23:07.680 "name": null, 00:23:07.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.680 "is_configured": false, 00:23:07.680 "data_offset": 0, 00:23:07.680 "data_size": 63488 00:23:07.680 }, 00:23:07.680 { 00:23:07.680 "name": "BaseBdev2", 00:23:07.680 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:07.680 "is_configured": true, 00:23:07.680 "data_offset": 2048, 00:23:07.680 "data_size": 63488 00:23:07.680 }, 00:23:07.680 { 00:23:07.680 "name": "BaseBdev3", 00:23:07.680 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:07.680 "is_configured": true, 00:23:07.680 "data_offset": 2048, 00:23:07.680 "data_size": 63488 00:23:07.680 } 00:23:07.680 ] 00:23:07.680 }' 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.680 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.246 "name": "raid_bdev1", 00:23:08.246 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:08.246 "strip_size_kb": 64, 00:23:08.246 "state": "online", 00:23:08.246 "raid_level": "raid5f", 00:23:08.246 "superblock": true, 00:23:08.246 "num_base_bdevs": 3, 00:23:08.246 "num_base_bdevs_discovered": 2, 00:23:08.246 "num_base_bdevs_operational": 2, 00:23:08.246 "base_bdevs_list": [ 00:23:08.246 { 00:23:08.246 "name": null, 00:23:08.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.246 "is_configured": false, 00:23:08.246 "data_offset": 0, 00:23:08.246 "data_size": 63488 00:23:08.246 }, 00:23:08.246 { 00:23:08.246 "name": "BaseBdev2", 00:23:08.246 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:08.246 "is_configured": true, 00:23:08.246 "data_offset": 2048, 00:23:08.246 "data_size": 63488 00:23:08.246 }, 00:23:08.246 { 00:23:08.246 "name": "BaseBdev3", 00:23:08.246 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:08.246 "is_configured": true, 00:23:08.246 "data_offset": 2048, 00:23:08.246 "data_size": 63488 00:23:08.246 } 00:23:08.246 ] 00:23:08.246 }' 00:23:08.246 14:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.246 14:55:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:08.246 14:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.246 14:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:08.246 14:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:08.246 14:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.246 14:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.246 [2024-11-04 14:55:38.072364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:08.246 [2024-11-04 14:55:38.087528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:23:08.246 14:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.246 14:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:08.246 [2024-11-04 14:55:38.095028] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:09.626 "name": "raid_bdev1", 00:23:09.626 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:09.626 "strip_size_kb": 64, 00:23:09.626 "state": "online", 00:23:09.626 "raid_level": "raid5f", 00:23:09.626 "superblock": true, 00:23:09.626 "num_base_bdevs": 3, 00:23:09.626 "num_base_bdevs_discovered": 3, 00:23:09.626 "num_base_bdevs_operational": 3, 00:23:09.626 "process": { 00:23:09.626 "type": "rebuild", 00:23:09.626 "target": "spare", 00:23:09.626 "progress": { 00:23:09.626 "blocks": 18432, 00:23:09.626 "percent": 14 00:23:09.626 } 00:23:09.626 }, 00:23:09.626 "base_bdevs_list": [ 00:23:09.626 { 00:23:09.626 "name": "spare", 00:23:09.626 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:09.626 "is_configured": true, 00:23:09.626 "data_offset": 2048, 00:23:09.626 "data_size": 63488 00:23:09.626 }, 00:23:09.626 { 00:23:09.626 "name": "BaseBdev2", 00:23:09.626 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:09.626 "is_configured": true, 00:23:09.626 "data_offset": 2048, 00:23:09.626 "data_size": 63488 00:23:09.626 }, 00:23:09.626 { 00:23:09.626 "name": "BaseBdev3", 00:23:09.626 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:09.626 "is_configured": true, 00:23:09.626 "data_offset": 2048, 00:23:09.626 "data_size": 63488 00:23:09.626 } 00:23:09.626 ] 00:23:09.626 }' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:09.626 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=621 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:09.626 "name": "raid_bdev1", 00:23:09.626 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:09.626 "strip_size_kb": 64, 00:23:09.626 "state": "online", 00:23:09.626 "raid_level": "raid5f", 00:23:09.626 "superblock": true, 00:23:09.626 "num_base_bdevs": 3, 00:23:09.626 "num_base_bdevs_discovered": 3, 00:23:09.626 "num_base_bdevs_operational": 3, 00:23:09.626 "process": { 00:23:09.626 "type": "rebuild", 00:23:09.626 "target": "spare", 00:23:09.626 "progress": { 00:23:09.626 "blocks": 22528, 00:23:09.626 "percent": 17 00:23:09.626 } 00:23:09.626 }, 00:23:09.626 "base_bdevs_list": [ 00:23:09.626 { 00:23:09.626 "name": "spare", 00:23:09.626 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:09.626 "is_configured": true, 00:23:09.626 "data_offset": 2048, 00:23:09.626 "data_size": 63488 00:23:09.626 }, 00:23:09.626 { 00:23:09.626 "name": "BaseBdev2", 00:23:09.626 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:09.626 "is_configured": true, 00:23:09.626 "data_offset": 2048, 00:23:09.626 "data_size": 63488 00:23:09.626 }, 00:23:09.626 { 00:23:09.626 "name": "BaseBdev3", 00:23:09.626 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:09.626 "is_configured": true, 00:23:09.626 "data_offset": 2048, 00:23:09.626 "data_size": 63488 00:23:09.626 } 00:23:09.626 ] 00:23:09.626 }' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:23:09.626 14:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.563 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.821 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:10.821 "name": "raid_bdev1", 00:23:10.821 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:10.821 "strip_size_kb": 64, 00:23:10.821 "state": "online", 00:23:10.821 "raid_level": "raid5f", 00:23:10.821 "superblock": true, 00:23:10.821 "num_base_bdevs": 3, 00:23:10.821 "num_base_bdevs_discovered": 3, 00:23:10.821 "num_base_bdevs_operational": 3, 00:23:10.821 "process": { 00:23:10.821 "type": "rebuild", 00:23:10.821 "target": "spare", 00:23:10.821 "progress": { 00:23:10.821 "blocks": 47104, 00:23:10.821 "percent": 37 00:23:10.821 } 00:23:10.821 }, 
00:23:10.821 "base_bdevs_list": [ 00:23:10.821 { 00:23:10.821 "name": "spare", 00:23:10.821 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:10.821 "is_configured": true, 00:23:10.821 "data_offset": 2048, 00:23:10.821 "data_size": 63488 00:23:10.821 }, 00:23:10.821 { 00:23:10.821 "name": "BaseBdev2", 00:23:10.821 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:10.821 "is_configured": true, 00:23:10.821 "data_offset": 2048, 00:23:10.821 "data_size": 63488 00:23:10.821 }, 00:23:10.821 { 00:23:10.821 "name": "BaseBdev3", 00:23:10.821 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:10.821 "is_configured": true, 00:23:10.821 "data_offset": 2048, 00:23:10.821 "data_size": 63488 00:23:10.821 } 00:23:10.821 ] 00:23:10.821 }' 00:23:10.821 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:10.821 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:10.821 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:10.821 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:10.821 14:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:11.754 
14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.754 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.012 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:12.012 "name": "raid_bdev1", 00:23:12.012 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:12.012 "strip_size_kb": 64, 00:23:12.012 "state": "online", 00:23:12.012 "raid_level": "raid5f", 00:23:12.012 "superblock": true, 00:23:12.012 "num_base_bdevs": 3, 00:23:12.012 "num_base_bdevs_discovered": 3, 00:23:12.012 "num_base_bdevs_operational": 3, 00:23:12.012 "process": { 00:23:12.012 "type": "rebuild", 00:23:12.012 "target": "spare", 00:23:12.012 "progress": { 00:23:12.012 "blocks": 69632, 00:23:12.012 "percent": 54 00:23:12.012 } 00:23:12.012 }, 00:23:12.013 "base_bdevs_list": [ 00:23:12.013 { 00:23:12.013 "name": "spare", 00:23:12.013 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:12.013 "is_configured": true, 00:23:12.013 "data_offset": 2048, 00:23:12.013 "data_size": 63488 00:23:12.013 }, 00:23:12.013 { 00:23:12.013 "name": "BaseBdev2", 00:23:12.013 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:12.013 "is_configured": true, 00:23:12.013 "data_offset": 2048, 00:23:12.013 "data_size": 63488 00:23:12.013 }, 00:23:12.013 { 00:23:12.013 "name": "BaseBdev3", 00:23:12.013 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:12.013 "is_configured": true, 00:23:12.013 "data_offset": 2048, 00:23:12.013 "data_size": 63488 00:23:12.013 } 00:23:12.013 ] 00:23:12.013 }' 00:23:12.013 14:55:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:12.013 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:12.013 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:12.013 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:12.013 14:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:12.946 "name": "raid_bdev1", 00:23:12.946 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:12.946 
"strip_size_kb": 64, 00:23:12.946 "state": "online", 00:23:12.946 "raid_level": "raid5f", 00:23:12.946 "superblock": true, 00:23:12.946 "num_base_bdevs": 3, 00:23:12.946 "num_base_bdevs_discovered": 3, 00:23:12.946 "num_base_bdevs_operational": 3, 00:23:12.946 "process": { 00:23:12.946 "type": "rebuild", 00:23:12.946 "target": "spare", 00:23:12.946 "progress": { 00:23:12.946 "blocks": 94208, 00:23:12.946 "percent": 74 00:23:12.946 } 00:23:12.946 }, 00:23:12.946 "base_bdevs_list": [ 00:23:12.946 { 00:23:12.946 "name": "spare", 00:23:12.946 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:12.946 "is_configured": true, 00:23:12.946 "data_offset": 2048, 00:23:12.946 "data_size": 63488 00:23:12.946 }, 00:23:12.946 { 00:23:12.946 "name": "BaseBdev2", 00:23:12.946 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:12.946 "is_configured": true, 00:23:12.946 "data_offset": 2048, 00:23:12.946 "data_size": 63488 00:23:12.946 }, 00:23:12.946 { 00:23:12.946 "name": "BaseBdev3", 00:23:12.946 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:12.946 "is_configured": true, 00:23:12.946 "data_offset": 2048, 00:23:12.946 "data_size": 63488 00:23:12.946 } 00:23:12.946 ] 00:23:12.946 }' 00:23:12.946 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.204 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.204 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.204 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.204 14:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.170 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:14.170 "name": "raid_bdev1", 00:23:14.170 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:14.170 "strip_size_kb": 64, 00:23:14.170 "state": "online", 00:23:14.170 "raid_level": "raid5f", 00:23:14.170 "superblock": true, 00:23:14.170 "num_base_bdevs": 3, 00:23:14.170 "num_base_bdevs_discovered": 3, 00:23:14.170 "num_base_bdevs_operational": 3, 00:23:14.170 "process": { 00:23:14.170 "type": "rebuild", 00:23:14.170 "target": "spare", 00:23:14.170 "progress": { 00:23:14.170 "blocks": 116736, 00:23:14.170 "percent": 91 00:23:14.170 } 00:23:14.170 }, 00:23:14.170 "base_bdevs_list": [ 00:23:14.170 { 00:23:14.170 "name": "spare", 00:23:14.170 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:14.170 "is_configured": true, 00:23:14.170 "data_offset": 2048, 00:23:14.170 "data_size": 63488 00:23:14.170 }, 00:23:14.170 { 00:23:14.170 "name": "BaseBdev2", 00:23:14.170 "uuid": 
"552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:14.170 "is_configured": true, 00:23:14.170 "data_offset": 2048, 00:23:14.170 "data_size": 63488 00:23:14.170 }, 00:23:14.170 { 00:23:14.170 "name": "BaseBdev3", 00:23:14.170 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:14.170 "is_configured": true, 00:23:14.170 "data_offset": 2048, 00:23:14.170 "data_size": 63488 00:23:14.170 } 00:23:14.170 ] 00:23:14.171 }' 00:23:14.171 14:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:14.171 14:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:14.171 14:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:14.455 14:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:14.455 14:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:14.713 [2024-11-04 14:55:44.363880] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:14.713 [2024-11-04 14:55:44.364004] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:14.713 [2024-11-04 14:55:44.364171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:15.278 "name": "raid_bdev1", 00:23:15.278 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:15.278 "strip_size_kb": 64, 00:23:15.278 "state": "online", 00:23:15.278 "raid_level": "raid5f", 00:23:15.278 "superblock": true, 00:23:15.278 "num_base_bdevs": 3, 00:23:15.278 "num_base_bdevs_discovered": 3, 00:23:15.278 "num_base_bdevs_operational": 3, 00:23:15.278 "base_bdevs_list": [ 00:23:15.278 { 00:23:15.278 "name": "spare", 00:23:15.278 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:15.278 "is_configured": true, 00:23:15.278 "data_offset": 2048, 00:23:15.278 "data_size": 63488 00:23:15.278 }, 00:23:15.278 { 00:23:15.278 "name": "BaseBdev2", 00:23:15.278 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:15.278 "is_configured": true, 00:23:15.278 "data_offset": 2048, 00:23:15.278 "data_size": 63488 00:23:15.278 }, 00:23:15.278 { 00:23:15.278 "name": "BaseBdev3", 00:23:15.278 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:15.278 "is_configured": true, 00:23:15.278 "data_offset": 2048, 00:23:15.278 "data_size": 63488 00:23:15.278 } 00:23:15.278 ] 00:23:15.278 }' 00:23:15.278 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:15.536 "name": "raid_bdev1", 00:23:15.536 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:15.536 "strip_size_kb": 64, 00:23:15.536 "state": "online", 00:23:15.536 "raid_level": "raid5f", 00:23:15.536 "superblock": true, 00:23:15.536 "num_base_bdevs": 3, 00:23:15.536 "num_base_bdevs_discovered": 3, 00:23:15.536 "num_base_bdevs_operational": 3, 00:23:15.536 "base_bdevs_list": [ 
00:23:15.536 { 00:23:15.536 "name": "spare", 00:23:15.536 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:15.536 "is_configured": true, 00:23:15.536 "data_offset": 2048, 00:23:15.536 "data_size": 63488 00:23:15.536 }, 00:23:15.536 { 00:23:15.536 "name": "BaseBdev2", 00:23:15.536 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:15.536 "is_configured": true, 00:23:15.536 "data_offset": 2048, 00:23:15.536 "data_size": 63488 00:23:15.536 }, 00:23:15.536 { 00:23:15.536 "name": "BaseBdev3", 00:23:15.536 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:15.536 "is_configured": true, 00:23:15.536 "data_offset": 2048, 00:23:15.536 "data_size": 63488 00:23:15.536 } 00:23:15.536 ] 00:23:15.536 }' 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.536 14:55:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.536 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.795 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.795 "name": "raid_bdev1", 00:23:15.795 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:15.795 "strip_size_kb": 64, 00:23:15.795 "state": "online", 00:23:15.795 "raid_level": "raid5f", 00:23:15.795 "superblock": true, 00:23:15.795 "num_base_bdevs": 3, 00:23:15.795 "num_base_bdevs_discovered": 3, 00:23:15.795 "num_base_bdevs_operational": 3, 00:23:15.795 "base_bdevs_list": [ 00:23:15.795 { 00:23:15.795 "name": "spare", 00:23:15.795 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:15.795 "is_configured": true, 00:23:15.795 "data_offset": 2048, 00:23:15.795 "data_size": 63488 00:23:15.795 }, 00:23:15.795 { 00:23:15.795 "name": "BaseBdev2", 00:23:15.795 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:15.795 "is_configured": true, 00:23:15.795 "data_offset": 2048, 00:23:15.795 "data_size": 63488 00:23:15.795 }, 00:23:15.795 { 00:23:15.795 "name": "BaseBdev3", 00:23:15.795 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:15.795 "is_configured": true, 00:23:15.795 "data_offset": 2048, 00:23:15.795 
"data_size": 63488 00:23:15.795 } 00:23:15.795 ] 00:23:15.795 }' 00:23:15.795 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.795 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.362 [2024-11-04 14:55:45.955139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.362 [2024-11-04 14:55:45.955174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.362 [2024-11-04 14:55:45.955349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.362 [2024-11-04 14:55:45.955448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.362 [2024-11-04 14:55:45.955478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:23:16.362 14:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:16.362 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:16.620 /dev/nbd0 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:16.620 14:55:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:16.620 1+0 records in 00:23:16.620 1+0 records out 00:23:16.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270483 s, 15.1 MB/s 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:16.620 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:16.879 /dev/nbd1 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:16.879 14:55:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:16.879 1+0 records in 00:23:16.879 1+0 records out 00:23:16.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441669 s, 9.3 MB/s 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:16.879 14:55:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:16.879 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:17.137 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:17.137 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:17.137 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:17.137 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:17.137 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:17.137 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:17.137 14:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:17.395 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:17.395 
14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.654 [2024-11-04 14:55:47.488068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:17.654 
[2024-11-04 14:55:47.488148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.654 [2024-11-04 14:55:47.488179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:17.654 [2024-11-04 14:55:47.488196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.654 [2024-11-04 14:55:47.491137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.654 [2024-11-04 14:55:47.491186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:17.654 [2024-11-04 14:55:47.491323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:17.654 [2024-11-04 14:55:47.491412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.654 [2024-11-04 14:55:47.491640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:17.654 [2024-11-04 14:55:47.491797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:17.654 spare 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.654 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.913 [2024-11-04 14:55:47.591949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:17.913 [2024-11-04 14:55:47.592031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:17.913 [2024-11-04 14:55:47.592467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:23:17.913 [2024-11-04 14:55:47.597107] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:17.913 [2024-11-04 14:55:47.597150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:17.913 [2024-11-04 14:55:47.597450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.913 "name": "raid_bdev1", 00:23:17.913 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:17.913 "strip_size_kb": 64, 00:23:17.913 "state": "online", 00:23:17.913 "raid_level": "raid5f", 00:23:17.913 "superblock": true, 00:23:17.913 "num_base_bdevs": 3, 00:23:17.913 "num_base_bdevs_discovered": 3, 00:23:17.913 "num_base_bdevs_operational": 3, 00:23:17.913 "base_bdevs_list": [ 00:23:17.913 { 00:23:17.913 "name": "spare", 00:23:17.913 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:17.913 "is_configured": true, 00:23:17.913 "data_offset": 2048, 00:23:17.913 "data_size": 63488 00:23:17.913 }, 00:23:17.913 { 00:23:17.913 "name": "BaseBdev2", 00:23:17.913 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:17.913 "is_configured": true, 00:23:17.913 "data_offset": 2048, 00:23:17.913 "data_size": 63488 00:23:17.913 }, 00:23:17.913 { 00:23:17.913 "name": "BaseBdev3", 00:23:17.913 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:17.913 "is_configured": true, 00:23:17.913 "data_offset": 2048, 00:23:17.913 "data_size": 63488 00:23:17.913 } 00:23:17.913 ] 00:23:17.913 }' 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.913 14:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.478 "name": "raid_bdev1", 00:23:18.478 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:18.478 "strip_size_kb": 64, 00:23:18.478 "state": "online", 00:23:18.478 "raid_level": "raid5f", 00:23:18.478 "superblock": true, 00:23:18.478 "num_base_bdevs": 3, 00:23:18.478 "num_base_bdevs_discovered": 3, 00:23:18.478 "num_base_bdevs_operational": 3, 00:23:18.478 "base_bdevs_list": [ 00:23:18.478 { 00:23:18.478 "name": "spare", 00:23:18.478 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:18.478 "is_configured": true, 00:23:18.478 "data_offset": 2048, 00:23:18.478 "data_size": 63488 00:23:18.478 }, 00:23:18.478 { 00:23:18.478 "name": "BaseBdev2", 00:23:18.478 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:18.478 "is_configured": true, 00:23:18.478 "data_offset": 2048, 00:23:18.478 "data_size": 63488 00:23:18.478 }, 00:23:18.478 { 00:23:18.478 "name": "BaseBdev3", 00:23:18.478 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:18.478 "is_configured": true, 00:23:18.478 "data_offset": 2048, 00:23:18.478 "data_size": 63488 00:23:18.478 } 00:23:18.478 ] 00:23:18.478 }' 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.478 [2024-11-04 14:55:48.335149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.478 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.736 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.736 "name": "raid_bdev1", 00:23:18.736 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:18.736 "strip_size_kb": 64, 00:23:18.736 "state": "online", 00:23:18.736 "raid_level": "raid5f", 00:23:18.736 "superblock": true, 00:23:18.736 "num_base_bdevs": 3, 00:23:18.736 "num_base_bdevs_discovered": 2, 00:23:18.736 "num_base_bdevs_operational": 2, 00:23:18.736 "base_bdevs_list": [ 00:23:18.736 { 00:23:18.736 "name": null, 00:23:18.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.736 "is_configured": false, 00:23:18.736 "data_offset": 0, 00:23:18.736 "data_size": 63488 00:23:18.736 }, 00:23:18.736 { 00:23:18.736 "name": "BaseBdev2", 
00:23:18.736 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:18.736 "is_configured": true, 00:23:18.736 "data_offset": 2048, 00:23:18.736 "data_size": 63488 00:23:18.736 }, 00:23:18.736 { 00:23:18.736 "name": "BaseBdev3", 00:23:18.736 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:18.736 "is_configured": true, 00:23:18.736 "data_offset": 2048, 00:23:18.736 "data_size": 63488 00:23:18.736 } 00:23:18.736 ] 00:23:18.736 }' 00:23:18.736 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.736 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.994 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:18.994 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.994 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.994 [2024-11-04 14:55:48.875436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:18.994 [2024-11-04 14:55:48.875674] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:18.994 [2024-11-04 14:55:48.875711] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:18.994 [2024-11-04 14:55:48.875758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:19.252 [2024-11-04 14:55:48.890642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:23:19.252 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.252 14:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:19.252 [2024-11-04 14:55:48.897884] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.184 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:20.184 "name": "raid_bdev1", 00:23:20.184 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:20.184 "strip_size_kb": 64, 00:23:20.184 "state": "online", 00:23:20.184 
"raid_level": "raid5f", 00:23:20.184 "superblock": true, 00:23:20.184 "num_base_bdevs": 3, 00:23:20.184 "num_base_bdevs_discovered": 3, 00:23:20.184 "num_base_bdevs_operational": 3, 00:23:20.184 "process": { 00:23:20.184 "type": "rebuild", 00:23:20.184 "target": "spare", 00:23:20.184 "progress": { 00:23:20.184 "blocks": 18432, 00:23:20.184 "percent": 14 00:23:20.184 } 00:23:20.184 }, 00:23:20.184 "base_bdevs_list": [ 00:23:20.184 { 00:23:20.184 "name": "spare", 00:23:20.184 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:20.184 "is_configured": true, 00:23:20.184 "data_offset": 2048, 00:23:20.184 "data_size": 63488 00:23:20.184 }, 00:23:20.184 { 00:23:20.184 "name": "BaseBdev2", 00:23:20.184 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:20.184 "is_configured": true, 00:23:20.184 "data_offset": 2048, 00:23:20.184 "data_size": 63488 00:23:20.184 }, 00:23:20.184 { 00:23:20.184 "name": "BaseBdev3", 00:23:20.185 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:20.185 "is_configured": true, 00:23:20.185 "data_offset": 2048, 00:23:20.185 "data_size": 63488 00:23:20.185 } 00:23:20.185 ] 00:23:20.185 }' 00:23:20.185 14:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:20.185 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:20.185 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:20.185 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:20.185 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:20.185 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.185 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.185 [2024-11-04 14:55:50.067168] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:20.442 [2024-11-04 14:55:50.111368] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:20.442 [2024-11-04 14:55:50.111452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.442 [2024-11-04 14:55:50.111477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:20.442 [2024-11-04 14:55:50.111491] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.442 "name": "raid_bdev1", 00:23:20.442 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:20.442 "strip_size_kb": 64, 00:23:20.442 "state": "online", 00:23:20.442 "raid_level": "raid5f", 00:23:20.442 "superblock": true, 00:23:20.442 "num_base_bdevs": 3, 00:23:20.442 "num_base_bdevs_discovered": 2, 00:23:20.442 "num_base_bdevs_operational": 2, 00:23:20.442 "base_bdevs_list": [ 00:23:20.442 { 00:23:20.442 "name": null, 00:23:20.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.442 "is_configured": false, 00:23:20.442 "data_offset": 0, 00:23:20.442 "data_size": 63488 00:23:20.442 }, 00:23:20.442 { 00:23:20.442 "name": "BaseBdev2", 00:23:20.442 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:20.442 "is_configured": true, 00:23:20.442 "data_offset": 2048, 00:23:20.442 "data_size": 63488 00:23:20.442 }, 00:23:20.442 { 00:23:20.442 "name": "BaseBdev3", 00:23:20.442 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:20.442 "is_configured": true, 00:23:20.442 "data_offset": 2048, 00:23:20.442 "data_size": 63488 00:23:20.442 } 00:23:20.442 ] 00:23:20.442 }' 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.442 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.007 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:21.007 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.007 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.007 [2024-11-04 14:55:50.672463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:21.007 [2024-11-04 14:55:50.672552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.007 [2024-11-04 14:55:50.672584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:21.007 [2024-11-04 14:55:50.672605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.007 [2024-11-04 14:55:50.673215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.007 [2024-11-04 14:55:50.673291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:21.007 [2024-11-04 14:55:50.673415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:21.007 [2024-11-04 14:55:50.673439] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:21.007 [2024-11-04 14:55:50.673453] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:21.007 [2024-11-04 14:55:50.673495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:21.007 [2024-11-04 14:55:50.688453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:23:21.007 spare 00:23:21.007 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.007 14:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:21.007 [2024-11-04 14:55:50.695669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:21.942 "name": "raid_bdev1", 00:23:21.942 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:21.942 "strip_size_kb": 64, 00:23:21.942 "state": 
"online", 00:23:21.942 "raid_level": "raid5f", 00:23:21.942 "superblock": true, 00:23:21.942 "num_base_bdevs": 3, 00:23:21.942 "num_base_bdevs_discovered": 3, 00:23:21.942 "num_base_bdevs_operational": 3, 00:23:21.942 "process": { 00:23:21.942 "type": "rebuild", 00:23:21.942 "target": "spare", 00:23:21.942 "progress": { 00:23:21.942 "blocks": 18432, 00:23:21.942 "percent": 14 00:23:21.942 } 00:23:21.942 }, 00:23:21.942 "base_bdevs_list": [ 00:23:21.942 { 00:23:21.942 "name": "spare", 00:23:21.942 "uuid": "4f5b3fe8-bffa-57b8-af27-cb8fe6708752", 00:23:21.942 "is_configured": true, 00:23:21.942 "data_offset": 2048, 00:23:21.942 "data_size": 63488 00:23:21.942 }, 00:23:21.942 { 00:23:21.942 "name": "BaseBdev2", 00:23:21.942 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:21.942 "is_configured": true, 00:23:21.942 "data_offset": 2048, 00:23:21.942 "data_size": 63488 00:23:21.942 }, 00:23:21.942 { 00:23:21.942 "name": "BaseBdev3", 00:23:21.942 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:21.942 "is_configured": true, 00:23:21.942 "data_offset": 2048, 00:23:21.942 "data_size": 63488 00:23:21.942 } 00:23:21.942 ] 00:23:21.942 }' 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:21.942 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.200 [2024-11-04 14:55:51.861468] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.200 [2024-11-04 14:55:51.909806] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:22.200 [2024-11-04 14:55:51.909914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.200 [2024-11-04 14:55:51.909944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.200 [2024-11-04 14:55:51.909956] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.200 "name": "raid_bdev1", 00:23:22.200 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:22.200 "strip_size_kb": 64, 00:23:22.200 "state": "online", 00:23:22.200 "raid_level": "raid5f", 00:23:22.200 "superblock": true, 00:23:22.200 "num_base_bdevs": 3, 00:23:22.200 "num_base_bdevs_discovered": 2, 00:23:22.200 "num_base_bdevs_operational": 2, 00:23:22.200 "base_bdevs_list": [ 00:23:22.200 { 00:23:22.200 "name": null, 00:23:22.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.200 "is_configured": false, 00:23:22.200 "data_offset": 0, 00:23:22.200 "data_size": 63488 00:23:22.200 }, 00:23:22.200 { 00:23:22.200 "name": "BaseBdev2", 00:23:22.200 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:22.200 "is_configured": true, 00:23:22.200 "data_offset": 2048, 00:23:22.200 "data_size": 63488 00:23:22.200 }, 00:23:22.200 { 00:23:22.200 "name": "BaseBdev3", 00:23:22.200 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:22.200 "is_configured": true, 00:23:22.200 "data_offset": 2048, 00:23:22.200 "data_size": 63488 00:23:22.200 } 00:23:22.200 ] 00:23:22.200 }' 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.200 14:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.765 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:22.765 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:22.766 "name": "raid_bdev1", 00:23:22.766 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:22.766 "strip_size_kb": 64, 00:23:22.766 "state": "online", 00:23:22.766 "raid_level": "raid5f", 00:23:22.766 "superblock": true, 00:23:22.766 "num_base_bdevs": 3, 00:23:22.766 "num_base_bdevs_discovered": 2, 00:23:22.766 "num_base_bdevs_operational": 2, 00:23:22.766 "base_bdevs_list": [ 00:23:22.766 { 00:23:22.766 "name": null, 00:23:22.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.766 "is_configured": false, 00:23:22.766 "data_offset": 0, 00:23:22.766 "data_size": 63488 00:23:22.766 }, 00:23:22.766 { 00:23:22.766 "name": "BaseBdev2", 00:23:22.766 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:22.766 "is_configured": true, 00:23:22.766 "data_offset": 2048, 00:23:22.766 "data_size": 63488 00:23:22.766 }, 00:23:22.766 { 00:23:22.766 "name": "BaseBdev3", 00:23:22.766 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:22.766 "is_configured": true, 
00:23:22.766 "data_offset": 2048, 00:23:22.766 "data_size": 63488 00:23:22.766 } 00:23:22.766 ] 00:23:22.766 }' 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.766 [2024-11-04 14:55:52.641822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:22.766 [2024-11-04 14:55:52.641905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.766 [2024-11-04 14:55:52.641939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:22.766 [2024-11-04 14:55:52.641954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.766 [2024-11-04 14:55:52.642548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.766 [2024-11-04 
14:55:52.642584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:22.766 [2024-11-04 14:55:52.642687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:22.766 [2024-11-04 14:55:52.642709] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:22.766 [2024-11-04 14:55:52.642739] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:22.766 [2024-11-04 14:55:52.642752] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:22.766 BaseBdev1 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.766 14:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.138 14:55:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.138 "name": "raid_bdev1", 00:23:24.138 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:24.138 "strip_size_kb": 64, 00:23:24.138 "state": "online", 00:23:24.138 "raid_level": "raid5f", 00:23:24.138 "superblock": true, 00:23:24.138 "num_base_bdevs": 3, 00:23:24.138 "num_base_bdevs_discovered": 2, 00:23:24.138 "num_base_bdevs_operational": 2, 00:23:24.138 "base_bdevs_list": [ 00:23:24.138 { 00:23:24.138 "name": null, 00:23:24.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.138 "is_configured": false, 00:23:24.138 "data_offset": 0, 00:23:24.138 "data_size": 63488 00:23:24.138 }, 00:23:24.138 { 00:23:24.138 "name": "BaseBdev2", 00:23:24.138 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:24.138 "is_configured": true, 00:23:24.138 "data_offset": 2048, 00:23:24.138 "data_size": 63488 00:23:24.138 }, 00:23:24.138 { 00:23:24.138 "name": "BaseBdev3", 00:23:24.138 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:24.138 "is_configured": true, 00:23:24.138 "data_offset": 2048, 00:23:24.138 "data_size": 63488 00:23:24.138 } 00:23:24.138 ] 00:23:24.138 }' 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.138 14:55:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:24.395 "name": "raid_bdev1", 00:23:24.395 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:24.395 "strip_size_kb": 64, 00:23:24.395 "state": "online", 00:23:24.395 "raid_level": "raid5f", 00:23:24.395 "superblock": true, 00:23:24.395 "num_base_bdevs": 3, 00:23:24.395 "num_base_bdevs_discovered": 2, 00:23:24.395 "num_base_bdevs_operational": 2, 00:23:24.395 "base_bdevs_list": [ 00:23:24.395 { 00:23:24.395 "name": null, 00:23:24.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.395 "is_configured": false, 00:23:24.395 "data_offset": 0, 00:23:24.395 "data_size": 63488 00:23:24.395 }, 00:23:24.395 { 00:23:24.395 "name": "BaseBdev2", 00:23:24.395 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 
00:23:24.395 "is_configured": true, 00:23:24.395 "data_offset": 2048, 00:23:24.395 "data_size": 63488 00:23:24.395 }, 00:23:24.395 { 00:23:24.395 "name": "BaseBdev3", 00:23:24.395 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:24.395 "is_configured": true, 00:23:24.395 "data_offset": 2048, 00:23:24.395 "data_size": 63488 00:23:24.395 } 00:23:24.395 ] 00:23:24.395 }' 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:24.395 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:24.652 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.652 14:55:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.652 [2024-11-04 14:55:54.330466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:24.652 [2024-11-04 14:55:54.330714] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:24.652 [2024-11-04 14:55:54.330742] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:24.652 request: 00:23:24.652 { 00:23:24.652 "base_bdev": "BaseBdev1", 00:23:24.653 "raid_bdev": "raid_bdev1", 00:23:24.653 "method": "bdev_raid_add_base_bdev", 00:23:24.653 "req_id": 1 00:23:24.653 } 00:23:24.653 Got JSON-RPC error response 00:23:24.653 response: 00:23:24.653 { 00:23:24.653 "code": -22, 00:23:24.653 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:24.653 } 00:23:24.653 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:24.653 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:23:24.653 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:24.653 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:24.653 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:24.653 14:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.587 "name": "raid_bdev1", 00:23:25.587 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:25.587 "strip_size_kb": 64, 00:23:25.587 "state": "online", 00:23:25.587 "raid_level": "raid5f", 00:23:25.587 "superblock": true, 00:23:25.587 "num_base_bdevs": 3, 00:23:25.587 "num_base_bdevs_discovered": 2, 00:23:25.587 "num_base_bdevs_operational": 2, 00:23:25.587 "base_bdevs_list": [ 00:23:25.587 { 00:23:25.587 "name": null, 00:23:25.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.587 "is_configured": false, 00:23:25.587 "data_offset": 0, 00:23:25.587 "data_size": 63488 00:23:25.587 }, 00:23:25.587 { 00:23:25.587 
"name": "BaseBdev2", 00:23:25.587 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:25.587 "is_configured": true, 00:23:25.587 "data_offset": 2048, 00:23:25.587 "data_size": 63488 00:23:25.587 }, 00:23:25.587 { 00:23:25.587 "name": "BaseBdev3", 00:23:25.587 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:25.587 "is_configured": true, 00:23:25.587 "data_offset": 2048, 00:23:25.587 "data_size": 63488 00:23:25.587 } 00:23:25.587 ] 00:23:25.587 }' 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.587 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:26.153 "name": "raid_bdev1", 00:23:26.153 "uuid": "bcc13fbd-04fd-476c-b345-c67fb500f052", 00:23:26.153 
"strip_size_kb": 64, 00:23:26.153 "state": "online", 00:23:26.153 "raid_level": "raid5f", 00:23:26.153 "superblock": true, 00:23:26.153 "num_base_bdevs": 3, 00:23:26.153 "num_base_bdevs_discovered": 2, 00:23:26.153 "num_base_bdevs_operational": 2, 00:23:26.153 "base_bdevs_list": [ 00:23:26.153 { 00:23:26.153 "name": null, 00:23:26.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.153 "is_configured": false, 00:23:26.153 "data_offset": 0, 00:23:26.153 "data_size": 63488 00:23:26.153 }, 00:23:26.153 { 00:23:26.153 "name": "BaseBdev2", 00:23:26.153 "uuid": "552dac94-7c04-5231-ba88-e93b68144cfc", 00:23:26.153 "is_configured": true, 00:23:26.153 "data_offset": 2048, 00:23:26.153 "data_size": 63488 00:23:26.153 }, 00:23:26.153 { 00:23:26.153 "name": "BaseBdev3", 00:23:26.153 "uuid": "4173c77b-9ec1-58a3-b6bf-61d9b007d830", 00:23:26.153 "is_configured": true, 00:23:26.153 "data_offset": 2048, 00:23:26.153 "data_size": 63488 00:23:26.153 } 00:23:26.153 ] 00:23:26.153 }' 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:26.153 14:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82541 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82541 ']' 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82541 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:26.153 14:55:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82541 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:26.153 killing process with pid 82541 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82541' 00:23:26.153 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82541 00:23:26.153 Received shutdown signal, test time was about 60.000000 seconds 00:23:26.153 00:23:26.153 Latency(us) 00:23:26.153 [2024-11-04T14:55:56.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.153 [2024-11-04T14:55:56.046Z] =================================================================================================================== 00:23:26.154 [2024-11-04T14:55:56.046Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.154 [2024-11-04 14:55:56.037040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:26.154 14:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82541 00:23:26.154 [2024-11-04 14:55:56.037192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.154 [2024-11-04 14:55:56.037288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.154 [2024-11-04 14:55:56.037311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:26.720 [2024-11-04 14:55:56.457955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:28.095 14:55:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:28.095 00:23:28.095 real 0m25.086s 00:23:28.095 user 0m33.303s 
00:23:28.095 sys 0m2.702s 00:23:28.095 14:55:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:28.095 14:55:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.095 ************************************ 00:23:28.095 END TEST raid5f_rebuild_test_sb 00:23:28.095 ************************************ 00:23:28.095 14:55:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:23:28.095 14:55:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:28.095 14:55:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:28.095 14:55:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:28.095 14:55:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:28.095 ************************************ 00:23:28.095 START TEST raid5f_state_function_test 00:23:28.095 ************************************ 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83310 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83310' 00:23:28.095 Process raid pid: 83310 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83310 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83310 ']' 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:28.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:28.095 14:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.095 [2024-11-04 14:55:57.800417] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:23:28.095 [2024-11-04 14:55:57.801535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.353 [2024-11-04 14:55:58.022054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.353 [2024-11-04 14:55:58.194114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.612 [2024-11-04 14:55:58.445931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:28.612 [2024-11-04 14:55:58.446022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.179 [2024-11-04 14:55:58.810091] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:29.179 [2024-11-04 14:55:58.810162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:29.179 [2024-11-04 14:55:58.810179] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:29.179 [2024-11-04 14:55:58.810212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:29.179 [2024-11-04 14:55:58.810223] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:23:29.179 [2024-11-04 14:55:58.810265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:29.179 [2024-11-04 14:55:58.810278] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:29.179 [2024-11-04 14:55:58.810294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.179 "name": "Existed_Raid", 00:23:29.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.179 "strip_size_kb": 64, 00:23:29.179 "state": "configuring", 00:23:29.179 "raid_level": "raid5f", 00:23:29.179 "superblock": false, 00:23:29.179 "num_base_bdevs": 4, 00:23:29.179 "num_base_bdevs_discovered": 0, 00:23:29.179 "num_base_bdevs_operational": 4, 00:23:29.179 "base_bdevs_list": [ 00:23:29.179 { 00:23:29.179 "name": "BaseBdev1", 00:23:29.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.179 "is_configured": false, 00:23:29.179 "data_offset": 0, 00:23:29.179 "data_size": 0 00:23:29.179 }, 00:23:29.179 { 00:23:29.179 "name": "BaseBdev2", 00:23:29.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.179 "is_configured": false, 00:23:29.179 "data_offset": 0, 00:23:29.179 "data_size": 0 00:23:29.179 }, 00:23:29.179 { 00:23:29.179 "name": "BaseBdev3", 00:23:29.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.179 "is_configured": false, 00:23:29.179 "data_offset": 0, 00:23:29.179 "data_size": 0 00:23:29.179 }, 00:23:29.179 { 00:23:29.179 "name": "BaseBdev4", 00:23:29.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.179 "is_configured": false, 00:23:29.179 "data_offset": 0, 00:23:29.179 "data_size": 0 00:23:29.179 } 00:23:29.179 ] 00:23:29.179 }' 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.179 14:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.437 14:55:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:29.437 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.437 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.437 [2024-11-04 14:55:59.314301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:29.437 [2024-11-04 14:55:59.314385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:29.438 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.438 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:29.438 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.438 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.438 [2024-11-04 14:55:59.322286] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:29.438 [2024-11-04 14:55:59.322361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:29.438 [2024-11-04 14:55:59.322377] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:29.438 [2024-11-04 14:55:59.322394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:29.438 [2024-11-04 14:55:59.322404] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:29.438 [2024-11-04 14:55:59.322419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:29.438 [2024-11-04 14:55:59.322429] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:23:29.438 [2024-11-04 14:55:59.322443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:29.438 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.438 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:29.438 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.438 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.696 [2024-11-04 14:55:59.368377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:29.696 BaseBdev1 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.696 
14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.696 [ 00:23:29.696 { 00:23:29.696 "name": "BaseBdev1", 00:23:29.696 "aliases": [ 00:23:29.696 "5f4e0da7-99e4-4407-91be-5c4d98d35be9" 00:23:29.696 ], 00:23:29.696 "product_name": "Malloc disk", 00:23:29.696 "block_size": 512, 00:23:29.696 "num_blocks": 65536, 00:23:29.696 "uuid": "5f4e0da7-99e4-4407-91be-5c4d98d35be9", 00:23:29.696 "assigned_rate_limits": { 00:23:29.696 "rw_ios_per_sec": 0, 00:23:29.696 "rw_mbytes_per_sec": 0, 00:23:29.696 "r_mbytes_per_sec": 0, 00:23:29.696 "w_mbytes_per_sec": 0 00:23:29.696 }, 00:23:29.696 "claimed": true, 00:23:29.696 "claim_type": "exclusive_write", 00:23:29.696 "zoned": false, 00:23:29.696 "supported_io_types": { 00:23:29.696 "read": true, 00:23:29.696 "write": true, 00:23:29.696 "unmap": true, 00:23:29.696 "flush": true, 00:23:29.696 "reset": true, 00:23:29.696 "nvme_admin": false, 00:23:29.696 "nvme_io": false, 00:23:29.696 "nvme_io_md": false, 00:23:29.696 "write_zeroes": true, 00:23:29.696 "zcopy": true, 00:23:29.696 "get_zone_info": false, 00:23:29.696 "zone_management": false, 00:23:29.696 "zone_append": false, 00:23:29.696 "compare": false, 00:23:29.696 "compare_and_write": false, 00:23:29.696 "abort": true, 00:23:29.696 "seek_hole": false, 00:23:29.696 "seek_data": false, 00:23:29.696 "copy": true, 00:23:29.696 "nvme_iov_md": false 00:23:29.696 }, 00:23:29.696 "memory_domains": [ 00:23:29.696 { 00:23:29.696 "dma_device_id": "system", 00:23:29.696 "dma_device_type": 1 00:23:29.696 }, 00:23:29.696 { 00:23:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.696 "dma_device_type": 2 00:23:29.696 } 00:23:29.696 ], 00:23:29.696 "driver_specific": {} 00:23:29.696 } 
00:23:29.696 ] 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:29.696 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.696 "name": "Existed_Raid", 00:23:29.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.697 "strip_size_kb": 64, 00:23:29.697 "state": "configuring", 00:23:29.697 "raid_level": "raid5f", 00:23:29.697 "superblock": false, 00:23:29.697 "num_base_bdevs": 4, 00:23:29.697 "num_base_bdevs_discovered": 1, 00:23:29.697 "num_base_bdevs_operational": 4, 00:23:29.697 "base_bdevs_list": [ 00:23:29.697 { 00:23:29.697 "name": "BaseBdev1", 00:23:29.697 "uuid": "5f4e0da7-99e4-4407-91be-5c4d98d35be9", 00:23:29.697 "is_configured": true, 00:23:29.697 "data_offset": 0, 00:23:29.697 "data_size": 65536 00:23:29.697 }, 00:23:29.697 { 00:23:29.697 "name": "BaseBdev2", 00:23:29.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.697 "is_configured": false, 00:23:29.697 "data_offset": 0, 00:23:29.697 "data_size": 0 00:23:29.697 }, 00:23:29.697 { 00:23:29.697 "name": "BaseBdev3", 00:23:29.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.697 "is_configured": false, 00:23:29.697 "data_offset": 0, 00:23:29.697 "data_size": 0 00:23:29.697 }, 00:23:29.697 { 00:23:29.697 "name": "BaseBdev4", 00:23:29.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.697 "is_configured": false, 00:23:29.697 "data_offset": 0, 00:23:29.697 "data_size": 0 00:23:29.697 } 00:23:29.697 ] 00:23:29.697 }' 00:23:29.697 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.697 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.263 
[2024-11-04 14:55:59.932684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:30.263 [2024-11-04 14:55:59.932768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.263 [2024-11-04 14:55:59.940718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:30.263 [2024-11-04 14:55:59.943480] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:30.263 [2024-11-04 14:55:59.943538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:30.263 [2024-11-04 14:55:59.943555] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:30.263 [2024-11-04 14:55:59.943573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:30.263 [2024-11-04 14:55:59.943583] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:30.263 [2024-11-04 14:55:59.943597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.263 "name": "Existed_Raid", 00:23:30.263 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:30.263 "strip_size_kb": 64, 00:23:30.263 "state": "configuring", 00:23:30.263 "raid_level": "raid5f", 00:23:30.263 "superblock": false, 00:23:30.263 "num_base_bdevs": 4, 00:23:30.263 "num_base_bdevs_discovered": 1, 00:23:30.263 "num_base_bdevs_operational": 4, 00:23:30.263 "base_bdevs_list": [ 00:23:30.263 { 00:23:30.263 "name": "BaseBdev1", 00:23:30.263 "uuid": "5f4e0da7-99e4-4407-91be-5c4d98d35be9", 00:23:30.263 "is_configured": true, 00:23:30.263 "data_offset": 0, 00:23:30.263 "data_size": 65536 00:23:30.263 }, 00:23:30.263 { 00:23:30.263 "name": "BaseBdev2", 00:23:30.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.263 "is_configured": false, 00:23:30.263 "data_offset": 0, 00:23:30.263 "data_size": 0 00:23:30.263 }, 00:23:30.263 { 00:23:30.263 "name": "BaseBdev3", 00:23:30.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.263 "is_configured": false, 00:23:30.263 "data_offset": 0, 00:23:30.263 "data_size": 0 00:23:30.263 }, 00:23:30.263 { 00:23:30.263 "name": "BaseBdev4", 00:23:30.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.263 "is_configured": false, 00:23:30.263 "data_offset": 0, 00:23:30.263 "data_size": 0 00:23:30.263 } 00:23:30.263 ] 00:23:30.263 }' 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.263 14:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.830 [2024-11-04 14:56:00.535844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:30.830 BaseBdev2 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.830 [ 00:23:30.830 { 00:23:30.830 "name": "BaseBdev2", 00:23:30.830 "aliases": [ 00:23:30.830 "74de5ef6-8e06-4a5e-88c9-23df228926ff" 00:23:30.830 ], 00:23:30.830 "product_name": "Malloc disk", 00:23:30.830 "block_size": 512, 00:23:30.830 "num_blocks": 65536, 00:23:30.830 "uuid": "74de5ef6-8e06-4a5e-88c9-23df228926ff", 00:23:30.830 "assigned_rate_limits": { 00:23:30.830 "rw_ios_per_sec": 0, 00:23:30.830 "rw_mbytes_per_sec": 0, 00:23:30.830 
"r_mbytes_per_sec": 0, 00:23:30.830 "w_mbytes_per_sec": 0 00:23:30.830 }, 00:23:30.830 "claimed": true, 00:23:30.830 "claim_type": "exclusive_write", 00:23:30.830 "zoned": false, 00:23:30.830 "supported_io_types": { 00:23:30.830 "read": true, 00:23:30.830 "write": true, 00:23:30.830 "unmap": true, 00:23:30.830 "flush": true, 00:23:30.830 "reset": true, 00:23:30.830 "nvme_admin": false, 00:23:30.830 "nvme_io": false, 00:23:30.830 "nvme_io_md": false, 00:23:30.830 "write_zeroes": true, 00:23:30.830 "zcopy": true, 00:23:30.830 "get_zone_info": false, 00:23:30.830 "zone_management": false, 00:23:30.830 "zone_append": false, 00:23:30.830 "compare": false, 00:23:30.830 "compare_and_write": false, 00:23:30.830 "abort": true, 00:23:30.830 "seek_hole": false, 00:23:30.830 "seek_data": false, 00:23:30.830 "copy": true, 00:23:30.830 "nvme_iov_md": false 00:23:30.830 }, 00:23:30.830 "memory_domains": [ 00:23:30.830 { 00:23:30.830 "dma_device_id": "system", 00:23:30.830 "dma_device_type": 1 00:23:30.830 }, 00:23:30.830 { 00:23:30.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.830 "dma_device_type": 2 00:23:30.830 } 00:23:30.830 ], 00:23:30.830 "driver_specific": {} 00:23:30.830 } 00:23:30.830 ] 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.830 "name": "Existed_Raid", 00:23:30.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.830 "strip_size_kb": 64, 00:23:30.830 "state": "configuring", 00:23:30.830 "raid_level": "raid5f", 00:23:30.830 "superblock": false, 00:23:30.830 "num_base_bdevs": 4, 00:23:30.830 "num_base_bdevs_discovered": 2, 00:23:30.830 "num_base_bdevs_operational": 4, 00:23:30.830 "base_bdevs_list": [ 00:23:30.830 { 00:23:30.830 "name": "BaseBdev1", 00:23:30.830 "uuid": 
"5f4e0da7-99e4-4407-91be-5c4d98d35be9", 00:23:30.830 "is_configured": true, 00:23:30.830 "data_offset": 0, 00:23:30.830 "data_size": 65536 00:23:30.830 }, 00:23:30.830 { 00:23:30.830 "name": "BaseBdev2", 00:23:30.830 "uuid": "74de5ef6-8e06-4a5e-88c9-23df228926ff", 00:23:30.830 "is_configured": true, 00:23:30.830 "data_offset": 0, 00:23:30.830 "data_size": 65536 00:23:30.830 }, 00:23:30.830 { 00:23:30.830 "name": "BaseBdev3", 00:23:30.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.830 "is_configured": false, 00:23:30.830 "data_offset": 0, 00:23:30.830 "data_size": 0 00:23:30.830 }, 00:23:30.830 { 00:23:30.830 "name": "BaseBdev4", 00:23:30.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.830 "is_configured": false, 00:23:30.830 "data_offset": 0, 00:23:30.830 "data_size": 0 00:23:30.830 } 00:23:30.830 ] 00:23:30.830 }' 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.830 14:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.397 [2024-11-04 14:56:01.129427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:31.397 BaseBdev3 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.397 [ 00:23:31.397 { 00:23:31.397 "name": "BaseBdev3", 00:23:31.397 "aliases": [ 00:23:31.397 "fdd29342-6f75-4ed8-afcf-bf2f7558d8d7" 00:23:31.397 ], 00:23:31.397 "product_name": "Malloc disk", 00:23:31.397 "block_size": 512, 00:23:31.397 "num_blocks": 65536, 00:23:31.397 "uuid": "fdd29342-6f75-4ed8-afcf-bf2f7558d8d7", 00:23:31.397 "assigned_rate_limits": { 00:23:31.397 "rw_ios_per_sec": 0, 00:23:31.397 "rw_mbytes_per_sec": 0, 00:23:31.397 "r_mbytes_per_sec": 0, 00:23:31.397 "w_mbytes_per_sec": 0 00:23:31.397 }, 00:23:31.397 "claimed": true, 00:23:31.397 "claim_type": "exclusive_write", 00:23:31.397 "zoned": false, 00:23:31.397 "supported_io_types": { 00:23:31.397 "read": true, 00:23:31.397 "write": true, 00:23:31.397 "unmap": true, 00:23:31.397 "flush": true, 00:23:31.397 "reset": true, 00:23:31.397 "nvme_admin": false, 
00:23:31.397 "nvme_io": false, 00:23:31.397 "nvme_io_md": false, 00:23:31.397 "write_zeroes": true, 00:23:31.397 "zcopy": true, 00:23:31.397 "get_zone_info": false, 00:23:31.397 "zone_management": false, 00:23:31.397 "zone_append": false, 00:23:31.397 "compare": false, 00:23:31.397 "compare_and_write": false, 00:23:31.397 "abort": true, 00:23:31.397 "seek_hole": false, 00:23:31.397 "seek_data": false, 00:23:31.397 "copy": true, 00:23:31.397 "nvme_iov_md": false 00:23:31.397 }, 00:23:31.397 "memory_domains": [ 00:23:31.397 { 00:23:31.397 "dma_device_id": "system", 00:23:31.397 "dma_device_type": 1 00:23:31.397 }, 00:23:31.397 { 00:23:31.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.397 "dma_device_type": 2 00:23:31.397 } 00:23:31.397 ], 00:23:31.397 "driver_specific": {} 00:23:31.397 } 00:23:31.397 ] 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.397 "name": "Existed_Raid", 00:23:31.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.397 "strip_size_kb": 64, 00:23:31.397 "state": "configuring", 00:23:31.397 "raid_level": "raid5f", 00:23:31.397 "superblock": false, 00:23:31.397 "num_base_bdevs": 4, 00:23:31.397 "num_base_bdevs_discovered": 3, 00:23:31.397 "num_base_bdevs_operational": 4, 00:23:31.397 "base_bdevs_list": [ 00:23:31.397 { 00:23:31.397 "name": "BaseBdev1", 00:23:31.397 "uuid": "5f4e0da7-99e4-4407-91be-5c4d98d35be9", 00:23:31.397 "is_configured": true, 00:23:31.397 "data_offset": 0, 00:23:31.397 "data_size": 65536 00:23:31.397 }, 00:23:31.397 { 00:23:31.397 "name": "BaseBdev2", 00:23:31.397 "uuid": "74de5ef6-8e06-4a5e-88c9-23df228926ff", 00:23:31.397 "is_configured": true, 00:23:31.397 "data_offset": 0, 00:23:31.397 "data_size": 65536 00:23:31.397 }, 00:23:31.397 { 
00:23:31.397 "name": "BaseBdev3", 00:23:31.397 "uuid": "fdd29342-6f75-4ed8-afcf-bf2f7558d8d7", 00:23:31.397 "is_configured": true, 00:23:31.397 "data_offset": 0, 00:23:31.397 "data_size": 65536 00:23:31.397 }, 00:23:31.397 { 00:23:31.397 "name": "BaseBdev4", 00:23:31.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.397 "is_configured": false, 00:23:31.397 "data_offset": 0, 00:23:31.397 "data_size": 0 00:23:31.397 } 00:23:31.397 ] 00:23:31.397 }' 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.397 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.963 [2024-11-04 14:56:01.708677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:31.963 [2024-11-04 14:56:01.708811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:31.963 [2024-11-04 14:56:01.708827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:31.963 [2024-11-04 14:56:01.709202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:31.963 [2024-11-04 14:56:01.716214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:31.963 [2024-11-04 14:56:01.716262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:31.963 [2024-11-04 14:56:01.716620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.963 BaseBdev4 00:23:31.963 14:56:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.963 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.963 [ 00:23:31.964 { 00:23:31.964 "name": "BaseBdev4", 00:23:31.964 "aliases": [ 00:23:31.964 "ec64884e-bede-4b72-a238-b835a22c85d6" 00:23:31.964 ], 00:23:31.964 "product_name": "Malloc disk", 00:23:31.964 "block_size": 512, 00:23:31.964 "num_blocks": 65536, 00:23:31.964 "uuid": "ec64884e-bede-4b72-a238-b835a22c85d6", 00:23:31.964 "assigned_rate_limits": { 00:23:31.964 "rw_ios_per_sec": 0, 00:23:31.964 
"rw_mbytes_per_sec": 0, 00:23:31.964 "r_mbytes_per_sec": 0, 00:23:31.964 "w_mbytes_per_sec": 0 00:23:31.964 }, 00:23:31.964 "claimed": true, 00:23:31.964 "claim_type": "exclusive_write", 00:23:31.964 "zoned": false, 00:23:31.964 "supported_io_types": { 00:23:31.964 "read": true, 00:23:31.964 "write": true, 00:23:31.964 "unmap": true, 00:23:31.964 "flush": true, 00:23:31.964 "reset": true, 00:23:31.964 "nvme_admin": false, 00:23:31.964 "nvme_io": false, 00:23:31.964 "nvme_io_md": false, 00:23:31.964 "write_zeroes": true, 00:23:31.964 "zcopy": true, 00:23:31.964 "get_zone_info": false, 00:23:31.964 "zone_management": false, 00:23:31.964 "zone_append": false, 00:23:31.964 "compare": false, 00:23:31.964 "compare_and_write": false, 00:23:31.964 "abort": true, 00:23:31.964 "seek_hole": false, 00:23:31.964 "seek_data": false, 00:23:31.964 "copy": true, 00:23:31.964 "nvme_iov_md": false 00:23:31.964 }, 00:23:31.964 "memory_domains": [ 00:23:31.964 { 00:23:31.964 "dma_device_id": "system", 00:23:31.964 "dma_device_type": 1 00:23:31.964 }, 00:23:31.964 { 00:23:31.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.964 "dma_device_type": 2 00:23:31.964 } 00:23:31.964 ], 00:23:31.964 "driver_specific": {} 00:23:31.964 } 00:23:31.964 ] 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.964 14:56:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.964 "name": "Existed_Raid", 00:23:31.964 "uuid": "4ccd20bb-a32b-4593-9952-7a89e81ca07e", 00:23:31.964 "strip_size_kb": 64, 00:23:31.964 "state": "online", 00:23:31.964 "raid_level": "raid5f", 00:23:31.964 "superblock": false, 00:23:31.964 "num_base_bdevs": 4, 00:23:31.964 "num_base_bdevs_discovered": 4, 00:23:31.964 "num_base_bdevs_operational": 4, 00:23:31.964 "base_bdevs_list": [ 00:23:31.964 { 00:23:31.964 "name": 
"BaseBdev1", 00:23:31.964 "uuid": "5f4e0da7-99e4-4407-91be-5c4d98d35be9", 00:23:31.964 "is_configured": true, 00:23:31.964 "data_offset": 0, 00:23:31.964 "data_size": 65536 00:23:31.964 }, 00:23:31.964 { 00:23:31.964 "name": "BaseBdev2", 00:23:31.964 "uuid": "74de5ef6-8e06-4a5e-88c9-23df228926ff", 00:23:31.964 "is_configured": true, 00:23:31.964 "data_offset": 0, 00:23:31.964 "data_size": 65536 00:23:31.964 }, 00:23:31.964 { 00:23:31.964 "name": "BaseBdev3", 00:23:31.964 "uuid": "fdd29342-6f75-4ed8-afcf-bf2f7558d8d7", 00:23:31.964 "is_configured": true, 00:23:31.964 "data_offset": 0, 00:23:31.964 "data_size": 65536 00:23:31.964 }, 00:23:31.964 { 00:23:31.964 "name": "BaseBdev4", 00:23:31.964 "uuid": "ec64884e-bede-4b72-a238-b835a22c85d6", 00:23:31.964 "is_configured": true, 00:23:31.964 "data_offset": 0, 00:23:31.964 "data_size": 65536 00:23:31.964 } 00:23:31.964 ] 00:23:31.964 }' 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.964 14:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.530 [2024-11-04 14:56:02.277077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.530 "name": "Existed_Raid", 00:23:32.530 "aliases": [ 00:23:32.530 "4ccd20bb-a32b-4593-9952-7a89e81ca07e" 00:23:32.530 ], 00:23:32.530 "product_name": "Raid Volume", 00:23:32.530 "block_size": 512, 00:23:32.530 "num_blocks": 196608, 00:23:32.530 "uuid": "4ccd20bb-a32b-4593-9952-7a89e81ca07e", 00:23:32.530 "assigned_rate_limits": { 00:23:32.530 "rw_ios_per_sec": 0, 00:23:32.530 "rw_mbytes_per_sec": 0, 00:23:32.530 "r_mbytes_per_sec": 0, 00:23:32.530 "w_mbytes_per_sec": 0 00:23:32.530 }, 00:23:32.530 "claimed": false, 00:23:32.530 "zoned": false, 00:23:32.530 "supported_io_types": { 00:23:32.530 "read": true, 00:23:32.530 "write": true, 00:23:32.530 "unmap": false, 00:23:32.530 "flush": false, 00:23:32.530 "reset": true, 00:23:32.530 "nvme_admin": false, 00:23:32.530 "nvme_io": false, 00:23:32.530 "nvme_io_md": false, 00:23:32.530 "write_zeroes": true, 00:23:32.530 "zcopy": false, 00:23:32.530 "get_zone_info": false, 00:23:32.530 "zone_management": false, 00:23:32.530 "zone_append": false, 00:23:32.530 "compare": false, 00:23:32.530 "compare_and_write": false, 00:23:32.530 "abort": false, 00:23:32.530 "seek_hole": false, 00:23:32.530 "seek_data": false, 00:23:32.530 "copy": false, 00:23:32.530 "nvme_iov_md": false 00:23:32.530 }, 00:23:32.530 "driver_specific": { 00:23:32.530 "raid": { 00:23:32.530 "uuid": "4ccd20bb-a32b-4593-9952-7a89e81ca07e", 00:23:32.530 "strip_size_kb": 64, 
00:23:32.530 "state": "online", 00:23:32.530 "raid_level": "raid5f", 00:23:32.530 "superblock": false, 00:23:32.530 "num_base_bdevs": 4, 00:23:32.530 "num_base_bdevs_discovered": 4, 00:23:32.530 "num_base_bdevs_operational": 4, 00:23:32.530 "base_bdevs_list": [ 00:23:32.530 { 00:23:32.530 "name": "BaseBdev1", 00:23:32.530 "uuid": "5f4e0da7-99e4-4407-91be-5c4d98d35be9", 00:23:32.530 "is_configured": true, 00:23:32.530 "data_offset": 0, 00:23:32.530 "data_size": 65536 00:23:32.530 }, 00:23:32.530 { 00:23:32.530 "name": "BaseBdev2", 00:23:32.530 "uuid": "74de5ef6-8e06-4a5e-88c9-23df228926ff", 00:23:32.530 "is_configured": true, 00:23:32.530 "data_offset": 0, 00:23:32.530 "data_size": 65536 00:23:32.530 }, 00:23:32.530 { 00:23:32.530 "name": "BaseBdev3", 00:23:32.530 "uuid": "fdd29342-6f75-4ed8-afcf-bf2f7558d8d7", 00:23:32.530 "is_configured": true, 00:23:32.530 "data_offset": 0, 00:23:32.530 "data_size": 65536 00:23:32.530 }, 00:23:32.530 { 00:23:32.530 "name": "BaseBdev4", 00:23:32.530 "uuid": "ec64884e-bede-4b72-a238-b835a22c85d6", 00:23:32.530 "is_configured": true, 00:23:32.530 "data_offset": 0, 00:23:32.530 "data_size": 65536 00:23:32.530 } 00:23:32.530 ] 00:23:32.530 } 00:23:32.530 } 00:23:32.530 }' 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:32.530 BaseBdev2 00:23:32.530 BaseBdev3 00:23:32.530 BaseBdev4' 00:23:32.530 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.788 14:56:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.788 14:56:02 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:23:32.788 [2024-11-04 14:56:02.640899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.046 14:56:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.046 "name": "Existed_Raid", 00:23:33.046 "uuid": "4ccd20bb-a32b-4593-9952-7a89e81ca07e", 00:23:33.046 "strip_size_kb": 64, 00:23:33.046 "state": "online", 00:23:33.046 "raid_level": "raid5f", 00:23:33.046 "superblock": false, 00:23:33.046 "num_base_bdevs": 4, 00:23:33.046 "num_base_bdevs_discovered": 3, 00:23:33.046 "num_base_bdevs_operational": 3, 00:23:33.046 "base_bdevs_list": [ 00:23:33.046 { 00:23:33.046 "name": null, 00:23:33.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.046 "is_configured": false, 00:23:33.046 "data_offset": 0, 00:23:33.046 "data_size": 65536 00:23:33.046 }, 00:23:33.046 { 00:23:33.046 "name": "BaseBdev2", 00:23:33.046 "uuid": "74de5ef6-8e06-4a5e-88c9-23df228926ff", 00:23:33.046 "is_configured": true, 00:23:33.046 "data_offset": 0, 00:23:33.046 "data_size": 65536 00:23:33.046 }, 00:23:33.046 { 00:23:33.046 "name": "BaseBdev3", 00:23:33.046 "uuid": "fdd29342-6f75-4ed8-afcf-bf2f7558d8d7", 00:23:33.046 "is_configured": true, 00:23:33.046 "data_offset": 0, 00:23:33.046 "data_size": 65536 00:23:33.046 }, 00:23:33.046 { 00:23:33.046 "name": "BaseBdev4", 00:23:33.046 "uuid": "ec64884e-bede-4b72-a238-b835a22c85d6", 00:23:33.046 "is_configured": true, 00:23:33.046 "data_offset": 0, 00:23:33.046 "data_size": 65536 00:23:33.046 } 00:23:33.046 ] 00:23:33.046 }' 00:23:33.046 
14:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.046 14:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.611 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.612 [2024-11-04 14:56:03.268119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:33.612 [2024-11-04 14:56:03.268281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:33.612 [2024-11-04 14:56:03.361691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.612 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.612 [2024-11-04 14:56:03.421746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.870 [2024-11-04 14:56:03.581997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:33.870 [2024-11-04 14:56:03.582080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.870 14:56:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.870 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.150 BaseBdev2 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.150 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.150 [ 00:23:34.150 { 00:23:34.150 "name": "BaseBdev2", 00:23:34.150 "aliases": [ 00:23:34.150 "1532af9b-7b92-4241-b49d-02c7f2f80a59" 00:23:34.150 ], 00:23:34.150 "product_name": "Malloc disk", 00:23:34.150 "block_size": 512, 00:23:34.150 "num_blocks": 65536, 00:23:34.150 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:34.150 "assigned_rate_limits": { 00:23:34.150 "rw_ios_per_sec": 0, 00:23:34.150 "rw_mbytes_per_sec": 0, 00:23:34.150 "r_mbytes_per_sec": 0, 00:23:34.150 "w_mbytes_per_sec": 0 00:23:34.150 }, 00:23:34.150 "claimed": false, 00:23:34.150 "zoned": false, 00:23:34.150 "supported_io_types": { 00:23:34.150 "read": true, 00:23:34.150 "write": true, 00:23:34.151 "unmap": true, 00:23:34.151 "flush": true, 00:23:34.151 "reset": true, 00:23:34.151 "nvme_admin": false, 00:23:34.151 "nvme_io": false, 00:23:34.151 "nvme_io_md": false, 00:23:34.151 "write_zeroes": true, 00:23:34.151 "zcopy": true, 00:23:34.151 "get_zone_info": false, 00:23:34.151 "zone_management": false, 00:23:34.151 "zone_append": false, 00:23:34.151 "compare": false, 00:23:34.151 "compare_and_write": false, 00:23:34.151 "abort": true, 00:23:34.151 "seek_hole": false, 00:23:34.151 "seek_data": false, 00:23:34.151 "copy": true, 00:23:34.151 "nvme_iov_md": false 00:23:34.151 }, 00:23:34.151 "memory_domains": [ 00:23:34.151 { 00:23:34.151 "dma_device_id": "system", 00:23:34.151 "dma_device_type": 1 00:23:34.151 }, 
00:23:34.151 { 00:23:34.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.151 "dma_device_type": 2 00:23:34.151 } 00:23:34.151 ], 00:23:34.151 "driver_specific": {} 00:23:34.151 } 00:23:34.151 ] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 BaseBdev3 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 [ 00:23:34.151 { 00:23:34.151 "name": "BaseBdev3", 00:23:34.151 "aliases": [ 00:23:34.151 "54af6d75-d829-4715-a6ff-6918f771c595" 00:23:34.151 ], 00:23:34.151 "product_name": "Malloc disk", 00:23:34.151 "block_size": 512, 00:23:34.151 "num_blocks": 65536, 00:23:34.151 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:34.151 "assigned_rate_limits": { 00:23:34.151 "rw_ios_per_sec": 0, 00:23:34.151 "rw_mbytes_per_sec": 0, 00:23:34.151 "r_mbytes_per_sec": 0, 00:23:34.151 "w_mbytes_per_sec": 0 00:23:34.151 }, 00:23:34.151 "claimed": false, 00:23:34.151 "zoned": false, 00:23:34.151 "supported_io_types": { 00:23:34.151 "read": true, 00:23:34.151 "write": true, 00:23:34.151 "unmap": true, 00:23:34.151 "flush": true, 00:23:34.151 "reset": true, 00:23:34.151 "nvme_admin": false, 00:23:34.151 "nvme_io": false, 00:23:34.151 "nvme_io_md": false, 00:23:34.151 "write_zeroes": true, 00:23:34.151 "zcopy": true, 00:23:34.151 "get_zone_info": false, 00:23:34.151 "zone_management": false, 00:23:34.151 "zone_append": false, 00:23:34.151 "compare": false, 00:23:34.151 "compare_and_write": false, 00:23:34.151 "abort": true, 00:23:34.151 "seek_hole": false, 00:23:34.151 "seek_data": false, 00:23:34.151 "copy": true, 00:23:34.151 "nvme_iov_md": false 00:23:34.151 }, 00:23:34.151 "memory_domains": [ 00:23:34.151 { 00:23:34.151 "dma_device_id": "system", 00:23:34.151 
"dma_device_type": 1 00:23:34.151 }, 00:23:34.151 { 00:23:34.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.151 "dma_device_type": 2 00:23:34.151 } 00:23:34.151 ], 00:23:34.151 "driver_specific": {} 00:23:34.151 } 00:23:34.151 ] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 BaseBdev4 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:34.151 14:56:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 [ 00:23:34.151 { 00:23:34.151 "name": "BaseBdev4", 00:23:34.151 "aliases": [ 00:23:34.151 "29f89e9c-4a4e-470c-a7c5-fdc5463e7960" 00:23:34.151 ], 00:23:34.151 "product_name": "Malloc disk", 00:23:34.151 "block_size": 512, 00:23:34.151 "num_blocks": 65536, 00:23:34.151 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:34.151 "assigned_rate_limits": { 00:23:34.151 "rw_ios_per_sec": 0, 00:23:34.151 "rw_mbytes_per_sec": 0, 00:23:34.151 "r_mbytes_per_sec": 0, 00:23:34.151 "w_mbytes_per_sec": 0 00:23:34.151 }, 00:23:34.151 "claimed": false, 00:23:34.151 "zoned": false, 00:23:34.151 "supported_io_types": { 00:23:34.151 "read": true, 00:23:34.151 "write": true, 00:23:34.151 "unmap": true, 00:23:34.151 "flush": true, 00:23:34.151 "reset": true, 00:23:34.151 "nvme_admin": false, 00:23:34.151 "nvme_io": false, 00:23:34.151 "nvme_io_md": false, 00:23:34.151 "write_zeroes": true, 00:23:34.151 "zcopy": true, 00:23:34.151 "get_zone_info": false, 00:23:34.151 "zone_management": false, 00:23:34.151 "zone_append": false, 00:23:34.151 "compare": false, 00:23:34.151 "compare_and_write": false, 00:23:34.151 "abort": true, 00:23:34.151 "seek_hole": false, 00:23:34.151 "seek_data": false, 00:23:34.151 "copy": true, 00:23:34.151 "nvme_iov_md": false 00:23:34.151 }, 00:23:34.151 "memory_domains": [ 00:23:34.151 { 00:23:34.151 
"dma_device_id": "system", 00:23:34.151 "dma_device_type": 1 00:23:34.151 }, 00:23:34.151 { 00:23:34.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.151 "dma_device_type": 2 00:23:34.151 } 00:23:34.151 ], 00:23:34.151 "driver_specific": {} 00:23:34.151 } 00:23:34.151 ] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.151 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 [2024-11-04 14:56:03.980505] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:34.151 [2024-11-04 14:56:03.980570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:34.151 [2024-11-04 14:56:03.980619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.151 [2024-11-04 14:56:03.983422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:34.151 [2024-11-04 14:56:03.983531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.152 14:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.152 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.409 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.409 "name": "Existed_Raid", 00:23:34.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.410 "strip_size_kb": 64, 00:23:34.410 "state": "configuring", 00:23:34.410 "raid_level": "raid5f", 00:23:34.410 "superblock": false, 00:23:34.410 
"num_base_bdevs": 4, 00:23:34.410 "num_base_bdevs_discovered": 3, 00:23:34.410 "num_base_bdevs_operational": 4, 00:23:34.410 "base_bdevs_list": [ 00:23:34.410 { 00:23:34.410 "name": "BaseBdev1", 00:23:34.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.410 "is_configured": false, 00:23:34.410 "data_offset": 0, 00:23:34.410 "data_size": 0 00:23:34.410 }, 00:23:34.410 { 00:23:34.410 "name": "BaseBdev2", 00:23:34.410 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:34.410 "is_configured": true, 00:23:34.410 "data_offset": 0, 00:23:34.410 "data_size": 65536 00:23:34.410 }, 00:23:34.410 { 00:23:34.410 "name": "BaseBdev3", 00:23:34.410 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:34.410 "is_configured": true, 00:23:34.410 "data_offset": 0, 00:23:34.410 "data_size": 65536 00:23:34.410 }, 00:23:34.410 { 00:23:34.410 "name": "BaseBdev4", 00:23:34.410 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:34.410 "is_configured": true, 00:23:34.410 "data_offset": 0, 00:23:34.410 "data_size": 65536 00:23:34.410 } 00:23:34.410 ] 00:23:34.410 }' 00:23:34.410 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.410 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.668 [2024-11-04 14:56:04.536741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.668 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.926 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.926 "name": "Existed_Raid", 00:23:34.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.926 "strip_size_kb": 64, 00:23:34.926 "state": "configuring", 00:23:34.926 "raid_level": "raid5f", 00:23:34.926 "superblock": false, 00:23:34.926 "num_base_bdevs": 4, 
00:23:34.926 "num_base_bdevs_discovered": 2, 00:23:34.926 "num_base_bdevs_operational": 4, 00:23:34.926 "base_bdevs_list": [ 00:23:34.926 { 00:23:34.926 "name": "BaseBdev1", 00:23:34.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.926 "is_configured": false, 00:23:34.926 "data_offset": 0, 00:23:34.926 "data_size": 0 00:23:34.926 }, 00:23:34.926 { 00:23:34.926 "name": null, 00:23:34.926 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:34.926 "is_configured": false, 00:23:34.926 "data_offset": 0, 00:23:34.926 "data_size": 65536 00:23:34.926 }, 00:23:34.926 { 00:23:34.926 "name": "BaseBdev3", 00:23:34.926 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:34.926 "is_configured": true, 00:23:34.926 "data_offset": 0, 00:23:34.926 "data_size": 65536 00:23:34.926 }, 00:23:34.926 { 00:23:34.926 "name": "BaseBdev4", 00:23:34.926 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:34.926 "is_configured": true, 00:23:34.926 "data_offset": 0, 00:23:34.926 "data_size": 65536 00:23:34.926 } 00:23:34.926 ] 00:23:34.926 }' 00:23:34.926 14:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.926 14:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:35.493 14:56:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.493 [2024-11-04 14:56:05.173397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.493 BaseBdev1 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:35.493 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.493 14:56:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.493 [ 00:23:35.493 { 00:23:35.493 "name": "BaseBdev1", 00:23:35.493 "aliases": [ 00:23:35.493 "97c353e8-d7a6-4754-8d02-0f65123436dd" 00:23:35.493 ], 00:23:35.493 "product_name": "Malloc disk", 00:23:35.493 "block_size": 512, 00:23:35.493 "num_blocks": 65536, 00:23:35.493 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:35.493 "assigned_rate_limits": { 00:23:35.493 "rw_ios_per_sec": 0, 00:23:35.493 "rw_mbytes_per_sec": 0, 00:23:35.493 "r_mbytes_per_sec": 0, 00:23:35.493 "w_mbytes_per_sec": 0 00:23:35.493 }, 00:23:35.493 "claimed": true, 00:23:35.493 "claim_type": "exclusive_write", 00:23:35.493 "zoned": false, 00:23:35.493 "supported_io_types": { 00:23:35.493 "read": true, 00:23:35.493 "write": true, 00:23:35.493 "unmap": true, 00:23:35.493 "flush": true, 00:23:35.493 "reset": true, 00:23:35.493 "nvme_admin": false, 00:23:35.493 "nvme_io": false, 00:23:35.493 "nvme_io_md": false, 00:23:35.493 "write_zeroes": true, 00:23:35.494 "zcopy": true, 00:23:35.494 "get_zone_info": false, 00:23:35.494 "zone_management": false, 00:23:35.494 "zone_append": false, 00:23:35.494 "compare": false, 00:23:35.494 "compare_and_write": false, 00:23:35.494 "abort": true, 00:23:35.494 "seek_hole": false, 00:23:35.494 "seek_data": false, 00:23:35.494 "copy": true, 00:23:35.494 "nvme_iov_md": false 00:23:35.494 }, 00:23:35.494 "memory_domains": [ 00:23:35.494 { 00:23:35.494 "dma_device_id": "system", 00:23:35.494 "dma_device_type": 1 00:23:35.494 }, 00:23:35.494 { 00:23:35.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.494 "dma_device_type": 2 00:23:35.494 } 00:23:35.494 ], 00:23:35.494 "driver_specific": {} 00:23:35.494 } 00:23:35.494 ] 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:35.494 14:56:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.494 "name": "Existed_Raid", 00:23:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.494 "strip_size_kb": 64, 00:23:35.494 "state": 
"configuring", 00:23:35.494 "raid_level": "raid5f", 00:23:35.494 "superblock": false, 00:23:35.494 "num_base_bdevs": 4, 00:23:35.494 "num_base_bdevs_discovered": 3, 00:23:35.494 "num_base_bdevs_operational": 4, 00:23:35.494 "base_bdevs_list": [ 00:23:35.494 { 00:23:35.494 "name": "BaseBdev1", 00:23:35.494 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:35.494 "is_configured": true, 00:23:35.494 "data_offset": 0, 00:23:35.494 "data_size": 65536 00:23:35.494 }, 00:23:35.494 { 00:23:35.494 "name": null, 00:23:35.494 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:35.494 "is_configured": false, 00:23:35.494 "data_offset": 0, 00:23:35.494 "data_size": 65536 00:23:35.494 }, 00:23:35.494 { 00:23:35.494 "name": "BaseBdev3", 00:23:35.494 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:35.494 "is_configured": true, 00:23:35.494 "data_offset": 0, 00:23:35.494 "data_size": 65536 00:23:35.494 }, 00:23:35.494 { 00:23:35.494 "name": "BaseBdev4", 00:23:35.494 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:35.494 "is_configured": true, 00:23:35.494 "data_offset": 0, 00:23:35.494 "data_size": 65536 00:23:35.494 } 00:23:35.494 ] 00:23:35.494 }' 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.494 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.059 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.059 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.059 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:36.059 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.059 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.059 14:56:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:36.059 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:36.059 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.060 [2024-11-04 14:56:05.761713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.060 14:56:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.060 "name": "Existed_Raid", 00:23:36.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.060 "strip_size_kb": 64, 00:23:36.060 "state": "configuring", 00:23:36.060 "raid_level": "raid5f", 00:23:36.060 "superblock": false, 00:23:36.060 "num_base_bdevs": 4, 00:23:36.060 "num_base_bdevs_discovered": 2, 00:23:36.060 "num_base_bdevs_operational": 4, 00:23:36.060 "base_bdevs_list": [ 00:23:36.060 { 00:23:36.060 "name": "BaseBdev1", 00:23:36.060 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:36.060 "is_configured": true, 00:23:36.060 "data_offset": 0, 00:23:36.060 "data_size": 65536 00:23:36.060 }, 00:23:36.060 { 00:23:36.060 "name": null, 00:23:36.060 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:36.060 "is_configured": false, 00:23:36.060 "data_offset": 0, 00:23:36.060 "data_size": 65536 00:23:36.060 }, 00:23:36.060 { 00:23:36.060 "name": null, 00:23:36.060 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:36.060 "is_configured": false, 00:23:36.060 "data_offset": 0, 00:23:36.060 "data_size": 65536 00:23:36.060 }, 00:23:36.060 { 00:23:36.060 "name": "BaseBdev4", 00:23:36.060 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:36.060 "is_configured": true, 00:23:36.060 "data_offset": 0, 00:23:36.060 "data_size": 65536 00:23:36.060 } 00:23:36.060 ] 00:23:36.060 }' 00:23:36.060 14:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.060 14:56:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.625 [2024-11-04 14:56:06.325827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.625 
14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.625 "name": "Existed_Raid", 00:23:36.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.625 "strip_size_kb": 64, 00:23:36.625 "state": "configuring", 00:23:36.625 "raid_level": "raid5f", 00:23:36.625 "superblock": false, 00:23:36.625 "num_base_bdevs": 4, 00:23:36.625 "num_base_bdevs_discovered": 3, 00:23:36.625 "num_base_bdevs_operational": 4, 00:23:36.625 "base_bdevs_list": [ 00:23:36.625 { 00:23:36.625 "name": "BaseBdev1", 00:23:36.625 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:36.625 "is_configured": true, 00:23:36.625 "data_offset": 0, 00:23:36.625 "data_size": 65536 00:23:36.625 }, 00:23:36.625 { 00:23:36.625 "name": null, 00:23:36.625 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:36.625 "is_configured": 
false, 00:23:36.625 "data_offset": 0, 00:23:36.625 "data_size": 65536 00:23:36.625 }, 00:23:36.625 { 00:23:36.625 "name": "BaseBdev3", 00:23:36.625 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:36.625 "is_configured": true, 00:23:36.625 "data_offset": 0, 00:23:36.625 "data_size": 65536 00:23:36.625 }, 00:23:36.625 { 00:23:36.625 "name": "BaseBdev4", 00:23:36.625 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:36.625 "is_configured": true, 00:23:36.625 "data_offset": 0, 00:23:36.625 "data_size": 65536 00:23:36.625 } 00:23:36.625 ] 00:23:36.625 }' 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.625 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.190 [2024-11-04 14:56:06.890020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.190 14:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.190 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.190 "name": "Existed_Raid", 00:23:37.190 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:37.190 "strip_size_kb": 64, 00:23:37.190 "state": "configuring", 00:23:37.190 "raid_level": "raid5f", 00:23:37.190 "superblock": false, 00:23:37.190 "num_base_bdevs": 4, 00:23:37.190 "num_base_bdevs_discovered": 2, 00:23:37.190 "num_base_bdevs_operational": 4, 00:23:37.190 "base_bdevs_list": [ 00:23:37.190 { 00:23:37.190 "name": null, 00:23:37.190 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:37.190 "is_configured": false, 00:23:37.190 "data_offset": 0, 00:23:37.190 "data_size": 65536 00:23:37.190 }, 00:23:37.190 { 00:23:37.190 "name": null, 00:23:37.190 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:37.190 "is_configured": false, 00:23:37.190 "data_offset": 0, 00:23:37.190 "data_size": 65536 00:23:37.190 }, 00:23:37.190 { 00:23:37.190 "name": "BaseBdev3", 00:23:37.190 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:37.190 "is_configured": true, 00:23:37.190 "data_offset": 0, 00:23:37.190 "data_size": 65536 00:23:37.190 }, 00:23:37.190 { 00:23:37.190 "name": "BaseBdev4", 00:23:37.190 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:37.190 "is_configured": true, 00:23:37.190 "data_offset": 0, 00:23:37.190 "data_size": 65536 00:23:37.190 } 00:23:37.190 ] 00:23:37.190 }' 00:23:37.190 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.190 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.756 [2024-11-04 14:56:07.593441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.756 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.014 14:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.014 "name": "Existed_Raid", 00:23:38.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.014 "strip_size_kb": 64, 00:23:38.014 "state": "configuring", 00:23:38.014 "raid_level": "raid5f", 00:23:38.014 "superblock": false, 00:23:38.014 "num_base_bdevs": 4, 00:23:38.014 "num_base_bdevs_discovered": 3, 00:23:38.014 "num_base_bdevs_operational": 4, 00:23:38.014 "base_bdevs_list": [ 00:23:38.014 { 00:23:38.014 "name": null, 00:23:38.014 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:38.014 "is_configured": false, 00:23:38.014 "data_offset": 0, 00:23:38.014 "data_size": 65536 00:23:38.014 }, 00:23:38.014 { 00:23:38.014 "name": "BaseBdev2", 00:23:38.014 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:38.014 "is_configured": true, 00:23:38.014 "data_offset": 0, 00:23:38.014 "data_size": 65536 00:23:38.014 }, 00:23:38.014 { 00:23:38.014 "name": "BaseBdev3", 00:23:38.014 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:38.014 "is_configured": true, 00:23:38.014 "data_offset": 0, 00:23:38.014 "data_size": 65536 00:23:38.014 }, 00:23:38.014 { 00:23:38.014 "name": "BaseBdev4", 00:23:38.014 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:38.014 "is_configured": true, 00:23:38.014 "data_offset": 0, 00:23:38.014 "data_size": 65536 00:23:38.014 } 00:23:38.014 ] 00:23:38.014 }' 00:23:38.014 14:56:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.014 14:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.272 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.272 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:38.272 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.272 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.272 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 97c353e8-d7a6-4754-8d02-0f65123436dd 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.530 [2024-11-04 14:56:08.255372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:38.530 [2024-11-04 
14:56:08.255461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:38.530 [2024-11-04 14:56:08.255475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:38.530 [2024-11-04 14:56:08.255839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:38.530 [2024-11-04 14:56:08.262463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:38.530 [2024-11-04 14:56:08.262496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:38.530 [2024-11-04 14:56:08.262855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.530 NewBaseBdev 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.530 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.531 [ 00:23:38.531 { 00:23:38.531 "name": "NewBaseBdev", 00:23:38.531 "aliases": [ 00:23:38.531 "97c353e8-d7a6-4754-8d02-0f65123436dd" 00:23:38.531 ], 00:23:38.531 "product_name": "Malloc disk", 00:23:38.531 "block_size": 512, 00:23:38.531 "num_blocks": 65536, 00:23:38.531 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:38.531 "assigned_rate_limits": { 00:23:38.531 "rw_ios_per_sec": 0, 00:23:38.531 "rw_mbytes_per_sec": 0, 00:23:38.531 "r_mbytes_per_sec": 0, 00:23:38.531 "w_mbytes_per_sec": 0 00:23:38.531 }, 00:23:38.531 "claimed": true, 00:23:38.531 "claim_type": "exclusive_write", 00:23:38.531 "zoned": false, 00:23:38.531 "supported_io_types": { 00:23:38.531 "read": true, 00:23:38.531 "write": true, 00:23:38.531 "unmap": true, 00:23:38.531 "flush": true, 00:23:38.531 "reset": true, 00:23:38.531 "nvme_admin": false, 00:23:38.531 "nvme_io": false, 00:23:38.531 "nvme_io_md": false, 00:23:38.531 "write_zeroes": true, 00:23:38.531 "zcopy": true, 00:23:38.531 "get_zone_info": false, 00:23:38.531 "zone_management": false, 00:23:38.531 "zone_append": false, 00:23:38.531 "compare": false, 00:23:38.531 "compare_and_write": false, 00:23:38.531 "abort": true, 00:23:38.531 "seek_hole": false, 00:23:38.531 "seek_data": false, 00:23:38.531 "copy": true, 00:23:38.531 "nvme_iov_md": false 00:23:38.531 }, 00:23:38.531 "memory_domains": [ 00:23:38.531 { 00:23:38.531 "dma_device_id": "system", 00:23:38.531 "dma_device_type": 1 00:23:38.531 }, 00:23:38.531 { 00:23:38.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.531 "dma_device_type": 2 00:23:38.531 } 
00:23:38.531 ], 00:23:38.531 "driver_specific": {} 00:23:38.531 } 00:23:38.531 ] 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.531 "name": "Existed_Raid", 00:23:38.531 "uuid": "a13f5c25-5dfb-45c3-864a-ed107cfbc2b4", 00:23:38.531 "strip_size_kb": 64, 00:23:38.531 "state": "online", 00:23:38.531 "raid_level": "raid5f", 00:23:38.531 "superblock": false, 00:23:38.531 "num_base_bdevs": 4, 00:23:38.531 "num_base_bdevs_discovered": 4, 00:23:38.531 "num_base_bdevs_operational": 4, 00:23:38.531 "base_bdevs_list": [ 00:23:38.531 { 00:23:38.531 "name": "NewBaseBdev", 00:23:38.531 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:38.531 "is_configured": true, 00:23:38.531 "data_offset": 0, 00:23:38.531 "data_size": 65536 00:23:38.531 }, 00:23:38.531 { 00:23:38.531 "name": "BaseBdev2", 00:23:38.531 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:38.531 "is_configured": true, 00:23:38.531 "data_offset": 0, 00:23:38.531 "data_size": 65536 00:23:38.531 }, 00:23:38.531 { 00:23:38.531 "name": "BaseBdev3", 00:23:38.531 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:38.531 "is_configured": true, 00:23:38.531 "data_offset": 0, 00:23:38.531 "data_size": 65536 00:23:38.531 }, 00:23:38.531 { 00:23:38.531 "name": "BaseBdev4", 00:23:38.531 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:38.531 "is_configured": true, 00:23:38.531 "data_offset": 0, 00:23:38.531 "data_size": 65536 00:23:38.531 } 00:23:38.531 ] 00:23:38.531 }' 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.531 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.119 [2024-11-04 14:56:08.823326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.119 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.119 "name": "Existed_Raid", 00:23:39.119 "aliases": [ 00:23:39.119 "a13f5c25-5dfb-45c3-864a-ed107cfbc2b4" 00:23:39.119 ], 00:23:39.119 "product_name": "Raid Volume", 00:23:39.119 "block_size": 512, 00:23:39.119 "num_blocks": 196608, 00:23:39.119 "uuid": "a13f5c25-5dfb-45c3-864a-ed107cfbc2b4", 00:23:39.119 "assigned_rate_limits": { 00:23:39.119 "rw_ios_per_sec": 0, 00:23:39.119 "rw_mbytes_per_sec": 0, 00:23:39.119 "r_mbytes_per_sec": 0, 00:23:39.119 "w_mbytes_per_sec": 0 00:23:39.119 }, 00:23:39.119 "claimed": false, 00:23:39.119 "zoned": false, 00:23:39.119 "supported_io_types": { 00:23:39.119 "read": true, 00:23:39.119 "write": true, 00:23:39.119 "unmap": false, 00:23:39.119 "flush": false, 00:23:39.119 "reset": true, 00:23:39.119 "nvme_admin": false, 00:23:39.119 "nvme_io": false, 00:23:39.119 "nvme_io_md": 
false, 00:23:39.119 "write_zeroes": true, 00:23:39.119 "zcopy": false, 00:23:39.119 "get_zone_info": false, 00:23:39.119 "zone_management": false, 00:23:39.119 "zone_append": false, 00:23:39.119 "compare": false, 00:23:39.119 "compare_and_write": false, 00:23:39.120 "abort": false, 00:23:39.120 "seek_hole": false, 00:23:39.120 "seek_data": false, 00:23:39.120 "copy": false, 00:23:39.120 "nvme_iov_md": false 00:23:39.120 }, 00:23:39.120 "driver_specific": { 00:23:39.120 "raid": { 00:23:39.120 "uuid": "a13f5c25-5dfb-45c3-864a-ed107cfbc2b4", 00:23:39.120 "strip_size_kb": 64, 00:23:39.120 "state": "online", 00:23:39.120 "raid_level": "raid5f", 00:23:39.120 "superblock": false, 00:23:39.120 "num_base_bdevs": 4, 00:23:39.120 "num_base_bdevs_discovered": 4, 00:23:39.120 "num_base_bdevs_operational": 4, 00:23:39.120 "base_bdevs_list": [ 00:23:39.120 { 00:23:39.120 "name": "NewBaseBdev", 00:23:39.120 "uuid": "97c353e8-d7a6-4754-8d02-0f65123436dd", 00:23:39.120 "is_configured": true, 00:23:39.120 "data_offset": 0, 00:23:39.120 "data_size": 65536 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "name": "BaseBdev2", 00:23:39.120 "uuid": "1532af9b-7b92-4241-b49d-02c7f2f80a59", 00:23:39.120 "is_configured": true, 00:23:39.120 "data_offset": 0, 00:23:39.120 "data_size": 65536 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "name": "BaseBdev3", 00:23:39.120 "uuid": "54af6d75-d829-4715-a6ff-6918f771c595", 00:23:39.120 "is_configured": true, 00:23:39.120 "data_offset": 0, 00:23:39.120 "data_size": 65536 00:23:39.120 }, 00:23:39.120 { 00:23:39.120 "name": "BaseBdev4", 00:23:39.120 "uuid": "29f89e9c-4a4e-470c-a7c5-fdc5463e7960", 00:23:39.120 "is_configured": true, 00:23:39.120 "data_offset": 0, 00:23:39.120 "data_size": 65536 00:23:39.120 } 00:23:39.120 ] 00:23:39.120 } 00:23:39.120 } 00:23:39.120 }' 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:39.120 14:56:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:39.120 BaseBdev2 00:23:39.120 BaseBdev3 00:23:39.120 BaseBdev4' 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.120 14:56:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.378 14:56:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:39.378 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.379 [2024-11-04 14:56:09.159123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:39.379 [2024-11-04 14:56:09.159170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.379 [2024-11-04 14:56:09.159321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.379 [2024-11-04 14:56:09.159732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.379 [2024-11-04 14:56:09.159762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83310 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83310 ']' 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83310 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83310 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:39.379 killing process with pid 83310 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83310' 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83310 00:23:39.379 [2024-11-04 14:56:09.203878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:39.379 14:56:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83310 00:23:39.948 [2024-11-04 14:56:09.573646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:40.882 14:56:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:40.882 00:23:40.882 real 0m12.945s 00:23:40.882 user 0m21.193s 00:23:40.882 sys 0m2.079s 00:23:40.882 14:56:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:40.882 14:56:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.882 ************************************ 00:23:40.882 END TEST raid5f_state_function_test 00:23:40.882 ************************************ 00:23:40.882 14:56:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:40.882 14:56:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:40.882 14:56:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:40.882 14:56:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:40.882 ************************************ 00:23:40.882 START TEST 
raid5f_state_function_test_sb 00:23:40.882 ************************************ 00:23:40.882 14:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:23:40.882 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:40.882 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:40.883 
14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83993 00:23:40.883 Process raid pid: 83993 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83993' 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:40.883 14:56:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83993 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83993 ']' 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:40.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:40.883 14:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.141 [2024-11-04 14:56:10.799402] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:23:41.141 [2024-11-04 14:56:10.799599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.141 [2024-11-04 14:56:10.986429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.399 [2024-11-04 14:56:11.127561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.657 [2024-11-04 14:56:11.336308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.657 [2024-11-04 14:56:11.336373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.916 [2024-11-04 14:56:11.797197] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:41.916 [2024-11-04 14:56:11.797286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:41.916 [2024-11-04 14:56:11.797303] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:41.916 [2024-11-04 14:56:11.797319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:41.916 [2024-11-04 14:56:11.797328] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:23:41.916 [2024-11-04 14:56:11.797341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:41.916 [2024-11-04 14:56:11.797354] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:41.916 [2024-11-04 14:56:11.797368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.916 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.174 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.174 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:23:42.174 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.174 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.174 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.174 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.174 "name": "Existed_Raid", 00:23:42.174 "uuid": "240581c6-172c-4bd7-b163-16de5f041e60", 00:23:42.174 "strip_size_kb": 64, 00:23:42.174 "state": "configuring", 00:23:42.174 "raid_level": "raid5f", 00:23:42.174 "superblock": true, 00:23:42.174 "num_base_bdevs": 4, 00:23:42.174 "num_base_bdevs_discovered": 0, 00:23:42.174 "num_base_bdevs_operational": 4, 00:23:42.174 "base_bdevs_list": [ 00:23:42.174 { 00:23:42.174 "name": "BaseBdev1", 00:23:42.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.174 "is_configured": false, 00:23:42.174 "data_offset": 0, 00:23:42.174 "data_size": 0 00:23:42.174 }, 00:23:42.174 { 00:23:42.174 "name": "BaseBdev2", 00:23:42.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.174 "is_configured": false, 00:23:42.174 "data_offset": 0, 00:23:42.174 "data_size": 0 00:23:42.174 }, 00:23:42.174 { 00:23:42.174 "name": "BaseBdev3", 00:23:42.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.174 "is_configured": false, 00:23:42.174 "data_offset": 0, 00:23:42.174 "data_size": 0 00:23:42.174 }, 00:23:42.174 { 00:23:42.174 "name": "BaseBdev4", 00:23:42.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.174 "is_configured": false, 00:23:42.174 "data_offset": 0, 00:23:42.174 "data_size": 0 00:23:42.174 } 00:23:42.174 ] 00:23:42.174 }' 00:23:42.174 14:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.174 14:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.432 [2024-11-04 14:56:12.297324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:42.432 [2024-11-04 14:56:12.297373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.432 [2024-11-04 14:56:12.305249] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:42.432 [2024-11-04 14:56:12.305321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:42.432 [2024-11-04 14:56:12.305336] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:42.432 [2024-11-04 14:56:12.305351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:42.432 [2024-11-04 14:56:12.305361] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:42.432 [2024-11-04 14:56:12.305375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:42.432 [2024-11-04 14:56:12.305384] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:42.432 [2024-11-04 14:56:12.305398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.432 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.690 [2024-11-04 14:56:12.351115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:42.690 BaseBdev1 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:42.690 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.691 [ 00:23:42.691 { 00:23:42.691 "name": "BaseBdev1", 00:23:42.691 "aliases": [ 00:23:42.691 "d57386d3-74e8-43b4-8cd3-ee00ec67f588" 00:23:42.691 ], 00:23:42.691 "product_name": "Malloc disk", 00:23:42.691 "block_size": 512, 00:23:42.691 "num_blocks": 65536, 00:23:42.691 "uuid": "d57386d3-74e8-43b4-8cd3-ee00ec67f588", 00:23:42.691 "assigned_rate_limits": { 00:23:42.691 "rw_ios_per_sec": 0, 00:23:42.691 "rw_mbytes_per_sec": 0, 00:23:42.691 "r_mbytes_per_sec": 0, 00:23:42.691 "w_mbytes_per_sec": 0 00:23:42.691 }, 00:23:42.691 "claimed": true, 00:23:42.691 "claim_type": "exclusive_write", 00:23:42.691 "zoned": false, 00:23:42.691 "supported_io_types": { 00:23:42.691 "read": true, 00:23:42.691 "write": true, 00:23:42.691 "unmap": true, 00:23:42.691 "flush": true, 00:23:42.691 "reset": true, 00:23:42.691 "nvme_admin": false, 00:23:42.691 "nvme_io": false, 00:23:42.691 "nvme_io_md": false, 00:23:42.691 "write_zeroes": true, 00:23:42.691 "zcopy": true, 00:23:42.691 "get_zone_info": false, 00:23:42.691 "zone_management": false, 00:23:42.691 "zone_append": false, 00:23:42.691 "compare": false, 00:23:42.691 "compare_and_write": false, 00:23:42.691 "abort": true, 00:23:42.691 "seek_hole": false, 00:23:42.691 "seek_data": false, 00:23:42.691 "copy": true, 00:23:42.691 "nvme_iov_md": false 00:23:42.691 }, 00:23:42.691 "memory_domains": [ 00:23:42.691 { 00:23:42.691 "dma_device_id": "system", 00:23:42.691 "dma_device_type": 1 00:23:42.691 }, 00:23:42.691 { 00:23:42.691 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:42.691 "dma_device_type": 2 00:23:42.691 } 00:23:42.691 ], 00:23:42.691 "driver_specific": {} 00:23:42.691 } 00:23:42.691 ] 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.691 "name": "Existed_Raid", 00:23:42.691 "uuid": "9eb9048a-9999-4ed2-9383-8e99d7ef20cc", 00:23:42.691 "strip_size_kb": 64, 00:23:42.691 "state": "configuring", 00:23:42.691 "raid_level": "raid5f", 00:23:42.691 "superblock": true, 00:23:42.691 "num_base_bdevs": 4, 00:23:42.691 "num_base_bdevs_discovered": 1, 00:23:42.691 "num_base_bdevs_operational": 4, 00:23:42.691 "base_bdevs_list": [ 00:23:42.691 { 00:23:42.691 "name": "BaseBdev1", 00:23:42.691 "uuid": "d57386d3-74e8-43b4-8cd3-ee00ec67f588", 00:23:42.691 "is_configured": true, 00:23:42.691 "data_offset": 2048, 00:23:42.691 "data_size": 63488 00:23:42.691 }, 00:23:42.691 { 00:23:42.691 "name": "BaseBdev2", 00:23:42.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.691 "is_configured": false, 00:23:42.691 "data_offset": 0, 00:23:42.691 "data_size": 0 00:23:42.691 }, 00:23:42.691 { 00:23:42.691 "name": "BaseBdev3", 00:23:42.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.691 "is_configured": false, 00:23:42.691 "data_offset": 0, 00:23:42.691 "data_size": 0 00:23:42.691 }, 00:23:42.691 { 00:23:42.691 "name": "BaseBdev4", 00:23:42.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.691 "is_configured": false, 00:23:42.691 "data_offset": 0, 00:23:42.691 "data_size": 0 00:23:42.691 } 00:23:42.691 ] 00:23:42.691 }' 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.691 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:43.257 14:56:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.257 [2024-11-04 14:56:12.887350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:43.257 [2024-11-04 14:56:12.887440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.257 [2024-11-04 14:56:12.895408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:43.257 [2024-11-04 14:56:12.898071] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:43.257 [2024-11-04 14:56:12.898122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:43.257 [2024-11-04 14:56:12.898138] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:43.257 [2024-11-04 14:56:12.898156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:43.257 [2024-11-04 14:56:12.898167] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:43.257 [2024-11-04 14:56:12.898181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.257 14:56:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.257 "name": "Existed_Raid", 00:23:43.257 "uuid": "9b293d13-884b-44d7-95bc-eb9c68e6577f", 00:23:43.257 "strip_size_kb": 64, 00:23:43.257 "state": "configuring", 00:23:43.257 "raid_level": "raid5f", 00:23:43.257 "superblock": true, 00:23:43.257 "num_base_bdevs": 4, 00:23:43.257 "num_base_bdevs_discovered": 1, 00:23:43.257 "num_base_bdevs_operational": 4, 00:23:43.257 "base_bdevs_list": [ 00:23:43.257 { 00:23:43.257 "name": "BaseBdev1", 00:23:43.257 "uuid": "d57386d3-74e8-43b4-8cd3-ee00ec67f588", 00:23:43.257 "is_configured": true, 00:23:43.257 "data_offset": 2048, 00:23:43.257 "data_size": 63488 00:23:43.257 }, 00:23:43.257 { 00:23:43.257 "name": "BaseBdev2", 00:23:43.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.257 "is_configured": false, 00:23:43.257 "data_offset": 0, 00:23:43.257 "data_size": 0 00:23:43.257 }, 00:23:43.257 { 00:23:43.257 "name": "BaseBdev3", 00:23:43.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.257 "is_configured": false, 00:23:43.257 "data_offset": 0, 00:23:43.257 "data_size": 0 00:23:43.257 }, 00:23:43.257 { 00:23:43.257 "name": "BaseBdev4", 00:23:43.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.257 "is_configured": false, 00:23:43.257 "data_offset": 0, 00:23:43.257 "data_size": 0 00:23:43.257 } 00:23:43.257 ] 00:23:43.257 }' 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.257 14:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
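The state dump above is produced by `bdev_raid.sh`'s `verify_raid_bdev_state` helper, which pulls one array record out of `bdev_raid_get_bdevs` output with a `jq` select filter and then checks fields such as `state` and `num_base_bdevs_discovered`. A minimal, self-contained sketch of that filter, run over a trimmed copy of the `Existed_Raid` record from this log (not against a live SPDK target):

```shell
#!/usr/bin/env bash
# Sketch of the jq filter used by verify_raid_bdev_state: select the one
# raid bdev by name, then read individual fields. The JSON is a trimmed
# copy of the "Existed_Raid" record printed in the log above.
raid_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<'EOF'
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
EOF
)

# The array stays "configuring" until all four base bdevs are discovered.
state=$(jq -r '.state' <<<"$raid_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$raid_info")
echo "$state $discovered"   # configuring 1
```

In the real test the JSON comes from `rpc_cmd bdev_raid_get_bdevs all`, as seen in the trace; only the input source differs here.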
00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.825 [2024-11-04 14:56:13.462972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:43.825 BaseBdev2 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.825 [ 00:23:43.825 { 00:23:43.825 "name": "BaseBdev2", 00:23:43.825 "aliases": [ 00:23:43.825 
"a0d1c544-2a39-4864-a5a2-39fb6c73e646" 00:23:43.825 ], 00:23:43.825 "product_name": "Malloc disk", 00:23:43.825 "block_size": 512, 00:23:43.825 "num_blocks": 65536, 00:23:43.825 "uuid": "a0d1c544-2a39-4864-a5a2-39fb6c73e646", 00:23:43.825 "assigned_rate_limits": { 00:23:43.825 "rw_ios_per_sec": 0, 00:23:43.825 "rw_mbytes_per_sec": 0, 00:23:43.825 "r_mbytes_per_sec": 0, 00:23:43.825 "w_mbytes_per_sec": 0 00:23:43.825 }, 00:23:43.825 "claimed": true, 00:23:43.825 "claim_type": "exclusive_write", 00:23:43.825 "zoned": false, 00:23:43.825 "supported_io_types": { 00:23:43.825 "read": true, 00:23:43.825 "write": true, 00:23:43.825 "unmap": true, 00:23:43.825 "flush": true, 00:23:43.825 "reset": true, 00:23:43.825 "nvme_admin": false, 00:23:43.825 "nvme_io": false, 00:23:43.825 "nvme_io_md": false, 00:23:43.825 "write_zeroes": true, 00:23:43.825 "zcopy": true, 00:23:43.825 "get_zone_info": false, 00:23:43.825 "zone_management": false, 00:23:43.825 "zone_append": false, 00:23:43.825 "compare": false, 00:23:43.825 "compare_and_write": false, 00:23:43.825 "abort": true, 00:23:43.825 "seek_hole": false, 00:23:43.825 "seek_data": false, 00:23:43.825 "copy": true, 00:23:43.825 "nvme_iov_md": false 00:23:43.825 }, 00:23:43.825 "memory_domains": [ 00:23:43.825 { 00:23:43.825 "dma_device_id": "system", 00:23:43.825 "dma_device_type": 1 00:23:43.825 }, 00:23:43.825 { 00:23:43.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.825 "dma_device_type": 2 00:23:43.825 } 00:23:43.825 ], 00:23:43.825 "driver_specific": {} 00:23:43.825 } 00:23:43.825 ] 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.825 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.826 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.826 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.826 "name": "Existed_Raid", 00:23:43.826 "uuid": 
"9b293d13-884b-44d7-95bc-eb9c68e6577f", 00:23:43.826 "strip_size_kb": 64, 00:23:43.826 "state": "configuring", 00:23:43.826 "raid_level": "raid5f", 00:23:43.826 "superblock": true, 00:23:43.826 "num_base_bdevs": 4, 00:23:43.826 "num_base_bdevs_discovered": 2, 00:23:43.826 "num_base_bdevs_operational": 4, 00:23:43.826 "base_bdevs_list": [ 00:23:43.826 { 00:23:43.826 "name": "BaseBdev1", 00:23:43.826 "uuid": "d57386d3-74e8-43b4-8cd3-ee00ec67f588", 00:23:43.826 "is_configured": true, 00:23:43.826 "data_offset": 2048, 00:23:43.826 "data_size": 63488 00:23:43.826 }, 00:23:43.826 { 00:23:43.826 "name": "BaseBdev2", 00:23:43.826 "uuid": "a0d1c544-2a39-4864-a5a2-39fb6c73e646", 00:23:43.826 "is_configured": true, 00:23:43.826 "data_offset": 2048, 00:23:43.826 "data_size": 63488 00:23:43.826 }, 00:23:43.826 { 00:23:43.826 "name": "BaseBdev3", 00:23:43.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.826 "is_configured": false, 00:23:43.826 "data_offset": 0, 00:23:43.826 "data_size": 0 00:23:43.826 }, 00:23:43.826 { 00:23:43.826 "name": "BaseBdev4", 00:23:43.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.826 "is_configured": false, 00:23:43.826 "data_offset": 0, 00:23:43.826 "data_size": 0 00:23:43.826 } 00:23:43.826 ] 00:23:43.826 }' 00:23:43.826 14:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.826 14:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.392 [2024-11-04 14:56:14.057825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:44.392 BaseBdev3 
00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.392 [ 00:23:44.392 { 00:23:44.392 "name": "BaseBdev3", 00:23:44.392 "aliases": [ 00:23:44.392 "f7f63f60-c628-484b-8a07-9dbe10a2f946" 00:23:44.392 ], 00:23:44.392 "product_name": "Malloc disk", 00:23:44.392 "block_size": 512, 00:23:44.392 "num_blocks": 65536, 00:23:44.392 "uuid": "f7f63f60-c628-484b-8a07-9dbe10a2f946", 00:23:44.392 
"assigned_rate_limits": { 00:23:44.392 "rw_ios_per_sec": 0, 00:23:44.392 "rw_mbytes_per_sec": 0, 00:23:44.392 "r_mbytes_per_sec": 0, 00:23:44.392 "w_mbytes_per_sec": 0 00:23:44.392 }, 00:23:44.392 "claimed": true, 00:23:44.392 "claim_type": "exclusive_write", 00:23:44.392 "zoned": false, 00:23:44.392 "supported_io_types": { 00:23:44.392 "read": true, 00:23:44.392 "write": true, 00:23:44.392 "unmap": true, 00:23:44.392 "flush": true, 00:23:44.392 "reset": true, 00:23:44.392 "nvme_admin": false, 00:23:44.392 "nvme_io": false, 00:23:44.392 "nvme_io_md": false, 00:23:44.392 "write_zeroes": true, 00:23:44.392 "zcopy": true, 00:23:44.392 "get_zone_info": false, 00:23:44.392 "zone_management": false, 00:23:44.392 "zone_append": false, 00:23:44.392 "compare": false, 00:23:44.392 "compare_and_write": false, 00:23:44.392 "abort": true, 00:23:44.392 "seek_hole": false, 00:23:44.392 "seek_data": false, 00:23:44.392 "copy": true, 00:23:44.392 "nvme_iov_md": false 00:23:44.392 }, 00:23:44.392 "memory_domains": [ 00:23:44.392 { 00:23:44.392 "dma_device_id": "system", 00:23:44.392 "dma_device_type": 1 00:23:44.392 }, 00:23:44.392 { 00:23:44.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.392 "dma_device_type": 2 00:23:44.392 } 00:23:44.392 ], 00:23:44.392 "driver_specific": {} 00:23:44.392 } 00:23:44.392 ] 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.392 "name": "Existed_Raid", 00:23:44.392 "uuid": "9b293d13-884b-44d7-95bc-eb9c68e6577f", 00:23:44.392 "strip_size_kb": 64, 00:23:44.392 "state": "configuring", 00:23:44.392 "raid_level": "raid5f", 00:23:44.392 "superblock": true, 00:23:44.392 "num_base_bdevs": 4, 00:23:44.392 "num_base_bdevs_discovered": 3, 
00:23:44.392 "num_base_bdevs_operational": 4, 00:23:44.392 "base_bdevs_list": [ 00:23:44.392 { 00:23:44.392 "name": "BaseBdev1", 00:23:44.392 "uuid": "d57386d3-74e8-43b4-8cd3-ee00ec67f588", 00:23:44.392 "is_configured": true, 00:23:44.392 "data_offset": 2048, 00:23:44.392 "data_size": 63488 00:23:44.392 }, 00:23:44.392 { 00:23:44.392 "name": "BaseBdev2", 00:23:44.392 "uuid": "a0d1c544-2a39-4864-a5a2-39fb6c73e646", 00:23:44.392 "is_configured": true, 00:23:44.392 "data_offset": 2048, 00:23:44.392 "data_size": 63488 00:23:44.392 }, 00:23:44.392 { 00:23:44.392 "name": "BaseBdev3", 00:23:44.392 "uuid": "f7f63f60-c628-484b-8a07-9dbe10a2f946", 00:23:44.392 "is_configured": true, 00:23:44.392 "data_offset": 2048, 00:23:44.392 "data_size": 63488 00:23:44.392 }, 00:23:44.392 { 00:23:44.392 "name": "BaseBdev4", 00:23:44.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.392 "is_configured": false, 00:23:44.392 "data_offset": 0, 00:23:44.392 "data_size": 0 00:23:44.392 } 00:23:44.392 ] 00:23:44.392 }' 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.392 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.959 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.960 [2024-11-04 14:56:14.645390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:44.960 [2024-11-04 14:56:14.645760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:44.960 [2024-11-04 14:56:14.645781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:44.960 BaseBdev4 
00:23:44.960 [2024-11-04 14:56:14.646136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.960 [2024-11-04 14:56:14.653072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:44.960 [2024-11-04 14:56:14.653106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:44.960 [2024-11-04 14:56:14.653435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:44.960 14:56:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.960 [ 00:23:44.960 { 00:23:44.960 "name": "BaseBdev4", 00:23:44.960 "aliases": [ 00:23:44.960 "e5c97cf0-b135-469f-b5ad-dcfe04cfa8ee" 00:23:44.960 ], 00:23:44.960 "product_name": "Malloc disk", 00:23:44.960 "block_size": 512, 00:23:44.960 "num_blocks": 65536, 00:23:44.960 "uuid": "e5c97cf0-b135-469f-b5ad-dcfe04cfa8ee", 00:23:44.960 "assigned_rate_limits": { 00:23:44.960 "rw_ios_per_sec": 0, 00:23:44.960 "rw_mbytes_per_sec": 0, 00:23:44.960 "r_mbytes_per_sec": 0, 00:23:44.960 "w_mbytes_per_sec": 0 00:23:44.960 }, 00:23:44.960 "claimed": true, 00:23:44.960 "claim_type": "exclusive_write", 00:23:44.960 "zoned": false, 00:23:44.960 "supported_io_types": { 00:23:44.960 "read": true, 00:23:44.960 "write": true, 00:23:44.960 "unmap": true, 00:23:44.960 "flush": true, 00:23:44.960 "reset": true, 00:23:44.960 "nvme_admin": false, 00:23:44.960 "nvme_io": false, 00:23:44.960 "nvme_io_md": false, 00:23:44.960 "write_zeroes": true, 00:23:44.960 "zcopy": true, 00:23:44.960 "get_zone_info": false, 00:23:44.960 "zone_management": false, 00:23:44.960 "zone_append": false, 00:23:44.960 "compare": false, 00:23:44.960 "compare_and_write": false, 00:23:44.960 "abort": true, 00:23:44.960 "seek_hole": false, 00:23:44.960 "seek_data": false, 00:23:44.960 "copy": true, 00:23:44.960 "nvme_iov_md": false 00:23:44.960 }, 00:23:44.960 "memory_domains": [ 00:23:44.960 { 00:23:44.960 "dma_device_id": "system", 00:23:44.960 "dma_device_type": 1 00:23:44.960 }, 00:23:44.960 { 00:23:44.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.960 "dma_device_type": 2 00:23:44.960 } 00:23:44.960 ], 00:23:44.960 "driver_specific": {} 00:23:44.960 } 00:23:44.960 ] 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.960 14:56:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.960 "name": "Existed_Raid", 00:23:44.960 "uuid": "9b293d13-884b-44d7-95bc-eb9c68e6577f", 00:23:44.960 "strip_size_kb": 64, 00:23:44.960 "state": "online", 00:23:44.960 "raid_level": "raid5f", 00:23:44.960 "superblock": true, 00:23:44.960 "num_base_bdevs": 4, 00:23:44.960 "num_base_bdevs_discovered": 4, 00:23:44.960 "num_base_bdevs_operational": 4, 00:23:44.960 "base_bdevs_list": [ 00:23:44.960 { 00:23:44.960 "name": "BaseBdev1", 00:23:44.960 "uuid": "d57386d3-74e8-43b4-8cd3-ee00ec67f588", 00:23:44.960 "is_configured": true, 00:23:44.960 "data_offset": 2048, 00:23:44.960 "data_size": 63488 00:23:44.960 }, 00:23:44.960 { 00:23:44.960 "name": "BaseBdev2", 00:23:44.960 "uuid": "a0d1c544-2a39-4864-a5a2-39fb6c73e646", 00:23:44.960 "is_configured": true, 00:23:44.960 "data_offset": 2048, 00:23:44.960 "data_size": 63488 00:23:44.960 }, 00:23:44.960 { 00:23:44.960 "name": "BaseBdev3", 00:23:44.960 "uuid": "f7f63f60-c628-484b-8a07-9dbe10a2f946", 00:23:44.960 "is_configured": true, 00:23:44.960 "data_offset": 2048, 00:23:44.960 "data_size": 63488 00:23:44.960 }, 00:23:44.960 { 00:23:44.960 "name": "BaseBdev4", 00:23:44.960 "uuid": "e5c97cf0-b135-469f-b5ad-dcfe04cfa8ee", 00:23:44.960 "is_configured": true, 00:23:44.960 "data_offset": 2048, 00:23:44.960 "data_size": 63488 00:23:44.960 } 00:23:44.960 ] 00:23:44.960 }' 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.960 14:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:45.556 [2024-11-04 14:56:15.197377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.556 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:45.556 "name": "Existed_Raid", 00:23:45.556 "aliases": [ 00:23:45.556 "9b293d13-884b-44d7-95bc-eb9c68e6577f" 00:23:45.556 ], 00:23:45.556 "product_name": "Raid Volume", 00:23:45.556 "block_size": 512, 00:23:45.556 "num_blocks": 190464, 00:23:45.556 "uuid": "9b293d13-884b-44d7-95bc-eb9c68e6577f", 00:23:45.556 "assigned_rate_limits": { 00:23:45.556 "rw_ios_per_sec": 0, 00:23:45.556 "rw_mbytes_per_sec": 0, 00:23:45.556 "r_mbytes_per_sec": 0, 00:23:45.556 "w_mbytes_per_sec": 0 00:23:45.556 }, 00:23:45.556 "claimed": false, 00:23:45.556 "zoned": false, 00:23:45.556 "supported_io_types": { 00:23:45.556 "read": true, 00:23:45.556 "write": true, 00:23:45.556 "unmap": false, 00:23:45.556 "flush": false, 
00:23:45.556 "reset": true, 00:23:45.556 "nvme_admin": false, 00:23:45.556 "nvme_io": false, 00:23:45.556 "nvme_io_md": false, 00:23:45.556 "write_zeroes": true, 00:23:45.556 "zcopy": false, 00:23:45.556 "get_zone_info": false, 00:23:45.556 "zone_management": false, 00:23:45.556 "zone_append": false, 00:23:45.556 "compare": false, 00:23:45.556 "compare_and_write": false, 00:23:45.556 "abort": false, 00:23:45.556 "seek_hole": false, 00:23:45.556 "seek_data": false, 00:23:45.556 "copy": false, 00:23:45.556 "nvme_iov_md": false 00:23:45.556 }, 00:23:45.556 "driver_specific": { 00:23:45.556 "raid": { 00:23:45.556 "uuid": "9b293d13-884b-44d7-95bc-eb9c68e6577f", 00:23:45.556 "strip_size_kb": 64, 00:23:45.556 "state": "online", 00:23:45.556 "raid_level": "raid5f", 00:23:45.556 "superblock": true, 00:23:45.556 "num_base_bdevs": 4, 00:23:45.556 "num_base_bdevs_discovered": 4, 00:23:45.556 "num_base_bdevs_operational": 4, 00:23:45.556 "base_bdevs_list": [ 00:23:45.556 { 00:23:45.556 "name": "BaseBdev1", 00:23:45.556 "uuid": "d57386d3-74e8-43b4-8cd3-ee00ec67f588", 00:23:45.556 "is_configured": true, 00:23:45.556 "data_offset": 2048, 00:23:45.557 "data_size": 63488 00:23:45.557 }, 00:23:45.557 { 00:23:45.557 "name": "BaseBdev2", 00:23:45.557 "uuid": "a0d1c544-2a39-4864-a5a2-39fb6c73e646", 00:23:45.557 "is_configured": true, 00:23:45.557 "data_offset": 2048, 00:23:45.557 "data_size": 63488 00:23:45.557 }, 00:23:45.557 { 00:23:45.557 "name": "BaseBdev3", 00:23:45.557 "uuid": "f7f63f60-c628-484b-8a07-9dbe10a2f946", 00:23:45.557 "is_configured": true, 00:23:45.557 "data_offset": 2048, 00:23:45.557 "data_size": 63488 00:23:45.557 }, 00:23:45.557 { 00:23:45.557 "name": "BaseBdev4", 00:23:45.557 "uuid": "e5c97cf0-b135-469f-b5ad-dcfe04cfa8ee", 00:23:45.557 "is_configured": true, 00:23:45.557 "data_offset": 2048, 00:23:45.557 "data_size": 63488 00:23:45.557 } 00:23:45.557 ] 00:23:45.557 } 00:23:45.557 } 00:23:45.557 }' 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:45.557 BaseBdev2 00:23:45.557 BaseBdev3 00:23:45.557 BaseBdev4' 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.557 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:45.815 14:56:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:45.815 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.816 [2024-11-04 14:56:15.577270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.816 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.073 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.073 "name": "Existed_Raid", 00:23:46.073 "uuid": "9b293d13-884b-44d7-95bc-eb9c68e6577f", 00:23:46.073 "strip_size_kb": 64, 00:23:46.073 "state": "online", 00:23:46.073 "raid_level": "raid5f", 00:23:46.073 "superblock": true, 00:23:46.073 "num_base_bdevs": 4, 00:23:46.073 "num_base_bdevs_discovered": 3, 00:23:46.073 "num_base_bdevs_operational": 3, 00:23:46.073 "base_bdevs_list": [ 00:23:46.073 { 00:23:46.073 "name": 
null, 00:23:46.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.073 "is_configured": false, 00:23:46.073 "data_offset": 0, 00:23:46.073 "data_size": 63488 00:23:46.073 }, 00:23:46.073 { 00:23:46.073 "name": "BaseBdev2", 00:23:46.073 "uuid": "a0d1c544-2a39-4864-a5a2-39fb6c73e646", 00:23:46.073 "is_configured": true, 00:23:46.073 "data_offset": 2048, 00:23:46.073 "data_size": 63488 00:23:46.073 }, 00:23:46.073 { 00:23:46.073 "name": "BaseBdev3", 00:23:46.073 "uuid": "f7f63f60-c628-484b-8a07-9dbe10a2f946", 00:23:46.073 "is_configured": true, 00:23:46.073 "data_offset": 2048, 00:23:46.073 "data_size": 63488 00:23:46.073 }, 00:23:46.073 { 00:23:46.073 "name": "BaseBdev4", 00:23:46.073 "uuid": "e5c97cf0-b135-469f-b5ad-dcfe04cfa8ee", 00:23:46.073 "is_configured": true, 00:23:46.073 "data_offset": 2048, 00:23:46.073 "data_size": 63488 00:23:46.073 } 00:23:46.073 ] 00:23:46.073 }' 00:23:46.073 14:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.073 14:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.331 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:46.331 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:46.331 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.331 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.331 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:46.331 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.331 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.589 [2024-11-04 14:56:16.249748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:46.589 [2024-11-04 14:56:16.250297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.589 [2024-11-04 14:56:16.335544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.589 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.589 [2024-11-04 14:56:16.391607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.849 [2024-11-04 
14:56:16.544832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:46.849 [2024-11-04 14:56:16.545628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.849 14:56:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.849 BaseBdev2 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.849 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.108 [ 00:23:47.108 { 00:23:47.108 "name": "BaseBdev2", 00:23:47.108 "aliases": [ 00:23:47.108 "45c5976f-d733-454d-9abf-8ab350d81774" 00:23:47.108 ], 00:23:47.108 "product_name": "Malloc disk", 00:23:47.108 "block_size": 512, 00:23:47.108 
"num_blocks": 65536, 00:23:47.108 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:47.108 "assigned_rate_limits": { 00:23:47.108 "rw_ios_per_sec": 0, 00:23:47.108 "rw_mbytes_per_sec": 0, 00:23:47.108 "r_mbytes_per_sec": 0, 00:23:47.108 "w_mbytes_per_sec": 0 00:23:47.108 }, 00:23:47.108 "claimed": false, 00:23:47.108 "zoned": false, 00:23:47.108 "supported_io_types": { 00:23:47.108 "read": true, 00:23:47.108 "write": true, 00:23:47.108 "unmap": true, 00:23:47.108 "flush": true, 00:23:47.108 "reset": true, 00:23:47.108 "nvme_admin": false, 00:23:47.108 "nvme_io": false, 00:23:47.108 "nvme_io_md": false, 00:23:47.108 "write_zeroes": true, 00:23:47.108 "zcopy": true, 00:23:47.108 "get_zone_info": false, 00:23:47.108 "zone_management": false, 00:23:47.108 "zone_append": false, 00:23:47.108 "compare": false, 00:23:47.108 "compare_and_write": false, 00:23:47.108 "abort": true, 00:23:47.108 "seek_hole": false, 00:23:47.108 "seek_data": false, 00:23:47.108 "copy": true, 00:23:47.108 "nvme_iov_md": false 00:23:47.108 }, 00:23:47.108 "memory_domains": [ 00:23:47.108 { 00:23:47.108 "dma_device_id": "system", 00:23:47.108 "dma_device_type": 1 00:23:47.108 }, 00:23:47.108 { 00:23:47.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.108 "dma_device_type": 2 00:23:47.108 } 00:23:47.108 ], 00:23:47.108 "driver_specific": {} 00:23:47.108 } 00:23:47.108 ] 00:23:47.108 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.108 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:47.108 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:47.108 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:47.108 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:47.108 14:56:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.108 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.109 BaseBdev3 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.109 [ 00:23:47.109 { 00:23:47.109 "name": "BaseBdev3", 00:23:47.109 "aliases": [ 00:23:47.109 
"6b9875c3-3328-4a52-93b6-652ac8c70c5a" 00:23:47.109 ], 00:23:47.109 "product_name": "Malloc disk", 00:23:47.109 "block_size": 512, 00:23:47.109 "num_blocks": 65536, 00:23:47.109 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:47.109 "assigned_rate_limits": { 00:23:47.109 "rw_ios_per_sec": 0, 00:23:47.109 "rw_mbytes_per_sec": 0, 00:23:47.109 "r_mbytes_per_sec": 0, 00:23:47.109 "w_mbytes_per_sec": 0 00:23:47.109 }, 00:23:47.109 "claimed": false, 00:23:47.109 "zoned": false, 00:23:47.109 "supported_io_types": { 00:23:47.109 "read": true, 00:23:47.109 "write": true, 00:23:47.109 "unmap": true, 00:23:47.109 "flush": true, 00:23:47.109 "reset": true, 00:23:47.109 "nvme_admin": false, 00:23:47.109 "nvme_io": false, 00:23:47.109 "nvme_io_md": false, 00:23:47.109 "write_zeroes": true, 00:23:47.109 "zcopy": true, 00:23:47.109 "get_zone_info": false, 00:23:47.109 "zone_management": false, 00:23:47.109 "zone_append": false, 00:23:47.109 "compare": false, 00:23:47.109 "compare_and_write": false, 00:23:47.109 "abort": true, 00:23:47.109 "seek_hole": false, 00:23:47.109 "seek_data": false, 00:23:47.109 "copy": true, 00:23:47.109 "nvme_iov_md": false 00:23:47.109 }, 00:23:47.109 "memory_domains": [ 00:23:47.109 { 00:23:47.109 "dma_device_id": "system", 00:23:47.109 "dma_device_type": 1 00:23:47.109 }, 00:23:47.109 { 00:23:47.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.109 "dma_device_type": 2 00:23:47.109 } 00:23:47.109 ], 00:23:47.109 "driver_specific": {} 00:23:47.109 } 00:23:47.109 ] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:47.109 14:56:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.109 BaseBdev4 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:47.109 [ 00:23:47.109 { 00:23:47.109 "name": "BaseBdev4", 00:23:47.109 "aliases": [ 00:23:47.109 "fa64f161-6f8c-43b4-aec8-a186ce1e36dc" 00:23:47.109 ], 00:23:47.109 "product_name": "Malloc disk", 00:23:47.109 "block_size": 512, 00:23:47.109 "num_blocks": 65536, 00:23:47.109 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:47.109 "assigned_rate_limits": { 00:23:47.109 "rw_ios_per_sec": 0, 00:23:47.109 "rw_mbytes_per_sec": 0, 00:23:47.109 "r_mbytes_per_sec": 0, 00:23:47.109 "w_mbytes_per_sec": 0 00:23:47.109 }, 00:23:47.109 "claimed": false, 00:23:47.109 "zoned": false, 00:23:47.109 "supported_io_types": { 00:23:47.109 "read": true, 00:23:47.109 "write": true, 00:23:47.109 "unmap": true, 00:23:47.109 "flush": true, 00:23:47.109 "reset": true, 00:23:47.109 "nvme_admin": false, 00:23:47.109 "nvme_io": false, 00:23:47.109 "nvme_io_md": false, 00:23:47.109 "write_zeroes": true, 00:23:47.109 "zcopy": true, 00:23:47.109 "get_zone_info": false, 00:23:47.109 "zone_management": false, 00:23:47.109 "zone_append": false, 00:23:47.109 "compare": false, 00:23:47.109 "compare_and_write": false, 00:23:47.109 "abort": true, 00:23:47.109 "seek_hole": false, 00:23:47.109 "seek_data": false, 00:23:47.109 "copy": true, 00:23:47.109 "nvme_iov_md": false 00:23:47.109 }, 00:23:47.109 "memory_domains": [ 00:23:47.109 { 00:23:47.109 "dma_device_id": "system", 00:23:47.109 "dma_device_type": 1 00:23:47.109 }, 00:23:47.109 { 00:23:47.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.109 "dma_device_type": 2 00:23:47.109 } 00:23:47.109 ], 00:23:47.109 "driver_specific": {} 00:23:47.109 } 00:23:47.109 ] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:47.109 14:56:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.109 [2024-11-04 14:56:16.917207] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:47.109 [2024-11-04 14:56:16.917294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:47.109 [2024-11-04 14:56:16.917328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:47.109 [2024-11-04 14:56:16.919803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:47.109 [2024-11-04 14:56:16.919883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.109 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.109 "name": "Existed_Raid", 00:23:47.109 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:47.109 "strip_size_kb": 64, 00:23:47.109 "state": "configuring", 00:23:47.109 "raid_level": "raid5f", 00:23:47.109 "superblock": true, 00:23:47.109 "num_base_bdevs": 4, 00:23:47.109 "num_base_bdevs_discovered": 3, 00:23:47.109 "num_base_bdevs_operational": 4, 00:23:47.109 "base_bdevs_list": [ 00:23:47.109 { 00:23:47.110 "name": "BaseBdev1", 00:23:47.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.110 "is_configured": false, 00:23:47.110 "data_offset": 0, 00:23:47.110 "data_size": 0 00:23:47.110 }, 00:23:47.110 { 00:23:47.110 "name": "BaseBdev2", 00:23:47.110 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:47.110 "is_configured": true, 00:23:47.110 "data_offset": 2048, 00:23:47.110 
"data_size": 63488 00:23:47.110 }, 00:23:47.110 { 00:23:47.110 "name": "BaseBdev3", 00:23:47.110 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:47.110 "is_configured": true, 00:23:47.110 "data_offset": 2048, 00:23:47.110 "data_size": 63488 00:23:47.110 }, 00:23:47.110 { 00:23:47.110 "name": "BaseBdev4", 00:23:47.110 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:47.110 "is_configured": true, 00:23:47.110 "data_offset": 2048, 00:23:47.110 "data_size": 63488 00:23:47.110 } 00:23:47.110 ] 00:23:47.110 }' 00:23:47.110 14:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.110 14:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.676 [2024-11-04 14:56:17.425410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:47.676 14:56:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.676 "name": "Existed_Raid", 00:23:47.676 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:47.676 "strip_size_kb": 64, 00:23:47.676 "state": "configuring", 00:23:47.676 "raid_level": "raid5f", 00:23:47.676 "superblock": true, 00:23:47.676 "num_base_bdevs": 4, 00:23:47.676 "num_base_bdevs_discovered": 2, 00:23:47.676 "num_base_bdevs_operational": 4, 00:23:47.676 "base_bdevs_list": [ 00:23:47.676 { 00:23:47.676 "name": "BaseBdev1", 00:23:47.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.676 "is_configured": false, 00:23:47.676 "data_offset": 0, 00:23:47.676 "data_size": 0 00:23:47.676 }, 00:23:47.676 { 00:23:47.676 "name": null, 00:23:47.676 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:47.676 
"is_configured": false, 00:23:47.676 "data_offset": 0, 00:23:47.676 "data_size": 63488 00:23:47.676 }, 00:23:47.676 { 00:23:47.676 "name": "BaseBdev3", 00:23:47.676 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:47.676 "is_configured": true, 00:23:47.676 "data_offset": 2048, 00:23:47.676 "data_size": 63488 00:23:47.676 }, 00:23:47.676 { 00:23:47.676 "name": "BaseBdev4", 00:23:47.676 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:47.676 "is_configured": true, 00:23:47.676 "data_offset": 2048, 00:23:47.676 "data_size": 63488 00:23:47.676 } 00:23:47.676 ] 00:23:47.676 }' 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.676 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.242 14:56:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.242 [2024-11-04 14:56:18.032835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:23:48.242 BaseBdev1 00:23:48.242 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.242 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:48.242 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:48.242 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:48.242 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:48.242 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:48.242 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.243 [ 00:23:48.243 { 00:23:48.243 "name": "BaseBdev1", 00:23:48.243 "aliases": [ 00:23:48.243 "843799fd-8921-40d1-88d3-36df8474edbb" 00:23:48.243 ], 00:23:48.243 "product_name": "Malloc disk", 00:23:48.243 "block_size": 512, 00:23:48.243 "num_blocks": 65536, 00:23:48.243 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 
00:23:48.243 "assigned_rate_limits": { 00:23:48.243 "rw_ios_per_sec": 0, 00:23:48.243 "rw_mbytes_per_sec": 0, 00:23:48.243 "r_mbytes_per_sec": 0, 00:23:48.243 "w_mbytes_per_sec": 0 00:23:48.243 }, 00:23:48.243 "claimed": true, 00:23:48.243 "claim_type": "exclusive_write", 00:23:48.243 "zoned": false, 00:23:48.243 "supported_io_types": { 00:23:48.243 "read": true, 00:23:48.243 "write": true, 00:23:48.243 "unmap": true, 00:23:48.243 "flush": true, 00:23:48.243 "reset": true, 00:23:48.243 "nvme_admin": false, 00:23:48.243 "nvme_io": false, 00:23:48.243 "nvme_io_md": false, 00:23:48.243 "write_zeroes": true, 00:23:48.243 "zcopy": true, 00:23:48.243 "get_zone_info": false, 00:23:48.243 "zone_management": false, 00:23:48.243 "zone_append": false, 00:23:48.243 "compare": false, 00:23:48.243 "compare_and_write": false, 00:23:48.243 "abort": true, 00:23:48.243 "seek_hole": false, 00:23:48.243 "seek_data": false, 00:23:48.243 "copy": true, 00:23:48.243 "nvme_iov_md": false 00:23:48.243 }, 00:23:48.243 "memory_domains": [ 00:23:48.243 { 00:23:48.243 "dma_device_id": "system", 00:23:48.243 "dma_device_type": 1 00:23:48.243 }, 00:23:48.243 { 00:23:48.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.243 "dma_device_type": 2 00:23:48.243 } 00:23:48.243 ], 00:23:48.243 "driver_specific": {} 00:23:48.243 } 00:23:48.243 ] 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:48.243 14:56:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:48.243 "name": "Existed_Raid", 00:23:48.243 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:48.243 "strip_size_kb": 64, 00:23:48.243 "state": "configuring", 00:23:48.243 "raid_level": "raid5f", 00:23:48.243 "superblock": true, 00:23:48.243 "num_base_bdevs": 4, 00:23:48.243 "num_base_bdevs_discovered": 3, 00:23:48.243 "num_base_bdevs_operational": 4, 00:23:48.243 "base_bdevs_list": [ 00:23:48.243 { 00:23:48.243 "name": "BaseBdev1", 00:23:48.243 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 
00:23:48.243 "is_configured": true, 00:23:48.243 "data_offset": 2048, 00:23:48.243 "data_size": 63488 00:23:48.243 }, 00:23:48.243 { 00:23:48.243 "name": null, 00:23:48.243 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:48.243 "is_configured": false, 00:23:48.243 "data_offset": 0, 00:23:48.243 "data_size": 63488 00:23:48.243 }, 00:23:48.243 { 00:23:48.243 "name": "BaseBdev3", 00:23:48.243 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:48.243 "is_configured": true, 00:23:48.243 "data_offset": 2048, 00:23:48.243 "data_size": 63488 00:23:48.243 }, 00:23:48.243 { 00:23:48.243 "name": "BaseBdev4", 00:23:48.243 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:48.243 "is_configured": true, 00:23:48.243 "data_offset": 2048, 00:23:48.243 "data_size": 63488 00:23:48.243 } 00:23:48.243 ] 00:23:48.243 }' 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:48.243 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.809 [2024-11-04 14:56:18.649151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.809 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:48.810 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.068 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.068 "name": "Existed_Raid", 00:23:49.068 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:49.068 "strip_size_kb": 64, 00:23:49.068 "state": "configuring", 00:23:49.068 "raid_level": "raid5f", 00:23:49.068 "superblock": true, 00:23:49.068 "num_base_bdevs": 4, 00:23:49.068 "num_base_bdevs_discovered": 2, 00:23:49.068 "num_base_bdevs_operational": 4, 00:23:49.068 "base_bdevs_list": [ 00:23:49.068 { 00:23:49.068 "name": "BaseBdev1", 00:23:49.068 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 00:23:49.068 "is_configured": true, 00:23:49.068 "data_offset": 2048, 00:23:49.068 "data_size": 63488 00:23:49.068 }, 00:23:49.068 { 00:23:49.068 "name": null, 00:23:49.068 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:49.068 "is_configured": false, 00:23:49.068 "data_offset": 0, 00:23:49.068 "data_size": 63488 00:23:49.068 }, 00:23:49.068 { 00:23:49.068 "name": null, 00:23:49.068 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:49.068 "is_configured": false, 00:23:49.068 "data_offset": 0, 00:23:49.068 "data_size": 63488 00:23:49.068 }, 00:23:49.068 { 00:23:49.068 "name": "BaseBdev4", 00:23:49.068 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:49.068 "is_configured": true, 00:23:49.068 "data_offset": 2048, 00:23:49.068 "data_size": 63488 00:23:49.068 } 00:23:49.068 ] 00:23:49.068 }' 00:23:49.068 14:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.068 14:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.327 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.327 14:56:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:49.327 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.327 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.327 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.585 [2024-11-04 14:56:19.241339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.585 "name": "Existed_Raid", 00:23:49.585 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:49.585 "strip_size_kb": 64, 00:23:49.585 "state": "configuring", 00:23:49.585 "raid_level": "raid5f", 00:23:49.585 "superblock": true, 00:23:49.585 "num_base_bdevs": 4, 00:23:49.585 "num_base_bdevs_discovered": 3, 00:23:49.585 "num_base_bdevs_operational": 4, 00:23:49.585 "base_bdevs_list": [ 00:23:49.585 { 00:23:49.585 "name": "BaseBdev1", 00:23:49.585 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 00:23:49.585 "is_configured": true, 00:23:49.585 "data_offset": 2048, 00:23:49.585 "data_size": 63488 00:23:49.585 }, 00:23:49.585 { 00:23:49.585 "name": null, 00:23:49.585 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:49.585 "is_configured": false, 00:23:49.585 "data_offset": 0, 00:23:49.585 "data_size": 63488 00:23:49.585 }, 00:23:49.585 { 00:23:49.585 "name": "BaseBdev3", 00:23:49.585 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 
00:23:49.585 "is_configured": true, 00:23:49.585 "data_offset": 2048, 00:23:49.585 "data_size": 63488 00:23:49.585 }, 00:23:49.585 { 00:23:49.585 "name": "BaseBdev4", 00:23:49.585 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:49.585 "is_configured": true, 00:23:49.585 "data_offset": 2048, 00:23:49.585 "data_size": 63488 00:23:49.585 } 00:23:49.585 ] 00:23:49.585 }' 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.585 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.151 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.151 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.151 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:50.151 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.151 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.151 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:50.151 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.152 [2024-11-04 14:56:19.853533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:50.152 "name": "Existed_Raid", 00:23:50.152 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:50.152 "strip_size_kb": 64, 00:23:50.152 "state": "configuring", 00:23:50.152 "raid_level": "raid5f", 
00:23:50.152 "superblock": true, 00:23:50.152 "num_base_bdevs": 4, 00:23:50.152 "num_base_bdevs_discovered": 2, 00:23:50.152 "num_base_bdevs_operational": 4, 00:23:50.152 "base_bdevs_list": [ 00:23:50.152 { 00:23:50.152 "name": null, 00:23:50.152 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 00:23:50.152 "is_configured": false, 00:23:50.152 "data_offset": 0, 00:23:50.152 "data_size": 63488 00:23:50.152 }, 00:23:50.152 { 00:23:50.152 "name": null, 00:23:50.152 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:50.152 "is_configured": false, 00:23:50.152 "data_offset": 0, 00:23:50.152 "data_size": 63488 00:23:50.152 }, 00:23:50.152 { 00:23:50.152 "name": "BaseBdev3", 00:23:50.152 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:50.152 "is_configured": true, 00:23:50.152 "data_offset": 2048, 00:23:50.152 "data_size": 63488 00:23:50.152 }, 00:23:50.152 { 00:23:50.152 "name": "BaseBdev4", 00:23:50.152 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:50.152 "is_configured": true, 00:23:50.152 "data_offset": 2048, 00:23:50.152 "data_size": 63488 00:23:50.152 } 00:23:50.152 ] 00:23:50.152 }' 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:50.152 14:56:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.718 [2024-11-04 14:56:20.529694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:50.718 "name": "Existed_Raid", 00:23:50.718 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:50.718 "strip_size_kb": 64, 00:23:50.718 "state": "configuring", 00:23:50.718 "raid_level": "raid5f", 00:23:50.718 "superblock": true, 00:23:50.718 "num_base_bdevs": 4, 00:23:50.718 "num_base_bdevs_discovered": 3, 00:23:50.718 "num_base_bdevs_operational": 4, 00:23:50.718 "base_bdevs_list": [ 00:23:50.718 { 00:23:50.718 "name": null, 00:23:50.718 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 00:23:50.718 "is_configured": false, 00:23:50.718 "data_offset": 0, 00:23:50.718 "data_size": 63488 00:23:50.718 }, 00:23:50.718 { 00:23:50.718 "name": "BaseBdev2", 00:23:50.718 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:50.718 "is_configured": true, 00:23:50.718 "data_offset": 2048, 00:23:50.718 "data_size": 63488 00:23:50.718 }, 00:23:50.718 { 00:23:50.718 "name": "BaseBdev3", 00:23:50.718 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:50.718 "is_configured": true, 00:23:50.718 "data_offset": 2048, 00:23:50.718 "data_size": 63488 00:23:50.718 }, 00:23:50.718 { 00:23:50.718 "name": "BaseBdev4", 00:23:50.718 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:50.718 "is_configured": true, 00:23:50.718 "data_offset": 2048, 00:23:50.718 "data_size": 63488 00:23:50.718 } 00:23:50.718 ] 00:23:50.718 }' 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:23:50.718 14:56:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 843799fd-8921-40d1-88d3-36df8474edbb 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.284 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.542 [2024-11-04 14:56:21.199371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:51.542 [2024-11-04 14:56:21.199913] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:51.542 [2024-11-04 14:56:21.199938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:51.542 NewBaseBdev 00:23:51.542 [2024-11-04 14:56:21.200288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.542 [2024-11-04 14:56:21.206869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:51.542 [2024-11-04 14:56:21.207021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:51.542 [2024-11-04 14:56:21.207374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.542 [ 00:23:51.542 { 00:23:51.542 "name": "NewBaseBdev", 00:23:51.542 "aliases": [ 00:23:51.542 "843799fd-8921-40d1-88d3-36df8474edbb" 00:23:51.542 ], 00:23:51.542 "product_name": "Malloc disk", 00:23:51.542 "block_size": 512, 00:23:51.542 "num_blocks": 65536, 00:23:51.542 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 00:23:51.542 "assigned_rate_limits": { 00:23:51.542 "rw_ios_per_sec": 0, 00:23:51.542 "rw_mbytes_per_sec": 0, 00:23:51.542 "r_mbytes_per_sec": 0, 00:23:51.542 "w_mbytes_per_sec": 0 00:23:51.542 }, 00:23:51.542 "claimed": true, 00:23:51.542 "claim_type": "exclusive_write", 00:23:51.542 "zoned": false, 00:23:51.542 "supported_io_types": { 00:23:51.542 "read": true, 00:23:51.542 "write": true, 00:23:51.542 "unmap": true, 00:23:51.542 "flush": true, 00:23:51.542 "reset": true, 00:23:51.542 "nvme_admin": false, 00:23:51.542 "nvme_io": false, 00:23:51.542 "nvme_io_md": false, 00:23:51.542 "write_zeroes": true, 00:23:51.542 "zcopy": true, 00:23:51.542 "get_zone_info": false, 00:23:51.542 "zone_management": false, 00:23:51.542 "zone_append": false, 00:23:51.542 "compare": false, 00:23:51.542 "compare_and_write": false, 00:23:51.542 "abort": true, 00:23:51.542 "seek_hole": false, 00:23:51.542 "seek_data": false, 00:23:51.542 "copy": true, 00:23:51.542 "nvme_iov_md": false 00:23:51.542 }, 00:23:51.542 "memory_domains": [ 00:23:51.542 { 00:23:51.542 "dma_device_id": "system", 00:23:51.542 "dma_device_type": 1 00:23:51.542 }, 00:23:51.542 { 00:23:51.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.542 "dma_device_type": 2 00:23:51.542 } 
00:23:51.542 ], 00:23:51.542 "driver_specific": {} 00:23:51.542 } 00:23:51.542 ] 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.542 
14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.542 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.542 "name": "Existed_Raid", 00:23:51.542 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:51.542 "strip_size_kb": 64, 00:23:51.542 "state": "online", 00:23:51.542 "raid_level": "raid5f", 00:23:51.542 "superblock": true, 00:23:51.542 "num_base_bdevs": 4, 00:23:51.542 "num_base_bdevs_discovered": 4, 00:23:51.542 "num_base_bdevs_operational": 4, 00:23:51.542 "base_bdevs_list": [ 00:23:51.542 { 00:23:51.542 "name": "NewBaseBdev", 00:23:51.542 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 00:23:51.542 "is_configured": true, 00:23:51.542 "data_offset": 2048, 00:23:51.542 "data_size": 63488 00:23:51.542 }, 00:23:51.542 { 00:23:51.543 "name": "BaseBdev2", 00:23:51.543 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:51.543 "is_configured": true, 00:23:51.543 "data_offset": 2048, 00:23:51.543 "data_size": 63488 00:23:51.543 }, 00:23:51.543 { 00:23:51.543 "name": "BaseBdev3", 00:23:51.543 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:51.543 "is_configured": true, 00:23:51.543 "data_offset": 2048, 00:23:51.543 "data_size": 63488 00:23:51.543 }, 00:23:51.543 { 00:23:51.543 "name": "BaseBdev4", 00:23:51.543 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:51.543 "is_configured": true, 00:23:51.543 "data_offset": 2048, 00:23:51.543 "data_size": 63488 00:23:51.543 } 00:23:51.543 ] 00:23:51.543 }' 00:23:51.543 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.543 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:52.109 [2024-11-04 14:56:21.772259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:52.109 "name": "Existed_Raid", 00:23:52.109 "aliases": [ 00:23:52.109 "1b80ee7e-863b-4107-880d-0deb832436ea" 00:23:52.109 ], 00:23:52.109 "product_name": "Raid Volume", 00:23:52.109 "block_size": 512, 00:23:52.109 "num_blocks": 190464, 00:23:52.109 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:52.109 "assigned_rate_limits": { 00:23:52.109 "rw_ios_per_sec": 0, 00:23:52.109 "rw_mbytes_per_sec": 0, 00:23:52.109 "r_mbytes_per_sec": 0, 00:23:52.109 "w_mbytes_per_sec": 0 00:23:52.109 }, 00:23:52.109 "claimed": false, 00:23:52.109 "zoned": false, 00:23:52.109 "supported_io_types": { 00:23:52.109 "read": true, 00:23:52.109 "write": true, 00:23:52.109 "unmap": false, 00:23:52.109 "flush": false, 
00:23:52.109 "reset": true, 00:23:52.109 "nvme_admin": false, 00:23:52.109 "nvme_io": false, 00:23:52.109 "nvme_io_md": false, 00:23:52.109 "write_zeroes": true, 00:23:52.109 "zcopy": false, 00:23:52.109 "get_zone_info": false, 00:23:52.109 "zone_management": false, 00:23:52.109 "zone_append": false, 00:23:52.109 "compare": false, 00:23:52.109 "compare_and_write": false, 00:23:52.109 "abort": false, 00:23:52.109 "seek_hole": false, 00:23:52.109 "seek_data": false, 00:23:52.109 "copy": false, 00:23:52.109 "nvme_iov_md": false 00:23:52.109 }, 00:23:52.109 "driver_specific": { 00:23:52.109 "raid": { 00:23:52.109 "uuid": "1b80ee7e-863b-4107-880d-0deb832436ea", 00:23:52.109 "strip_size_kb": 64, 00:23:52.109 "state": "online", 00:23:52.109 "raid_level": "raid5f", 00:23:52.109 "superblock": true, 00:23:52.109 "num_base_bdevs": 4, 00:23:52.109 "num_base_bdevs_discovered": 4, 00:23:52.109 "num_base_bdevs_operational": 4, 00:23:52.109 "base_bdevs_list": [ 00:23:52.109 { 00:23:52.109 "name": "NewBaseBdev", 00:23:52.109 "uuid": "843799fd-8921-40d1-88d3-36df8474edbb", 00:23:52.109 "is_configured": true, 00:23:52.109 "data_offset": 2048, 00:23:52.109 "data_size": 63488 00:23:52.109 }, 00:23:52.109 { 00:23:52.109 "name": "BaseBdev2", 00:23:52.109 "uuid": "45c5976f-d733-454d-9abf-8ab350d81774", 00:23:52.109 "is_configured": true, 00:23:52.109 "data_offset": 2048, 00:23:52.109 "data_size": 63488 00:23:52.109 }, 00:23:52.109 { 00:23:52.109 "name": "BaseBdev3", 00:23:52.109 "uuid": "6b9875c3-3328-4a52-93b6-652ac8c70c5a", 00:23:52.109 "is_configured": true, 00:23:52.109 "data_offset": 2048, 00:23:52.109 "data_size": 63488 00:23:52.109 }, 00:23:52.109 { 00:23:52.109 "name": "BaseBdev4", 00:23:52.109 "uuid": "fa64f161-6f8c-43b4-aec8-a186ce1e36dc", 00:23:52.109 "is_configured": true, 00:23:52.109 "data_offset": 2048, 00:23:52.109 "data_size": 63488 00:23:52.109 } 00:23:52.109 ] 00:23:52.109 } 00:23:52.109 } 00:23:52.109 }' 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:52.109 BaseBdev2 00:23:52.109 BaseBdev3 00:23:52.109 BaseBdev4' 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.109 14:56:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.368 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:52.368 14:56:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.368 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.368 [2024-11-04 14:56:22.111884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:52.369 [2024-11-04 14:56:22.111936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:52.369 [2024-11-04 14:56:22.112106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:52.369 [2024-11-04 14:56:22.112667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.369 [2024-11-04 14:56:22.112693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83993 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83993 ']' 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
83993 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83993 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83993' 00:23:52.369 killing process with pid 83993 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83993 00:23:52.369 14:56:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83993 00:23:52.369 [2024-11-04 14:56:22.155903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:52.934 [2024-11-04 14:56:22.594970] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:54.309 14:56:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:54.309 00:23:54.309 real 0m13.136s 00:23:54.309 user 0m21.490s 00:23:54.309 sys 0m1.915s 00:23:54.309 14:56:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:54.309 ************************************ 00:23:54.309 END TEST raid5f_state_function_test_sb 00:23:54.309 ************************************ 00:23:54.309 14:56:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.309 14:56:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:23:54.309 14:56:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 
-le 1 ']' 00:23:54.309 14:56:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:54.309 14:56:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:54.309 ************************************ 00:23:54.309 START TEST raid5f_superblock_test 00:23:54.309 ************************************ 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:23:54.309 14:56:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84676 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84676 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84676 ']' 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:54.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:54.309 14:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.309 [2024-11-04 14:56:23.981039] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:23:54.309 [2024-11-04 14:56:23.982110] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84676 ] 00:23:54.309 [2024-11-04 14:56:24.166780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.567 [2024-11-04 14:56:24.332452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.824 [2024-11-04 14:56:24.551573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.824 [2024-11-04 14:56:24.551653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.389 14:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.389 malloc1 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.389 [2024-11-04 14:56:25.024452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:55.389 [2024-11-04 14:56:25.024557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.389 [2024-11-04 14:56:25.024588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:55.389 [2024-11-04 14:56:25.024603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.389 [2024-11-04 14:56:25.027650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.389 [2024-11-04 14:56:25.027861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:55.389 pt1 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:23:55.389 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 malloc2 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 [2024-11-04 14:56:25.082388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:55.390 [2024-11-04 14:56:25.082716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.390 [2024-11-04 14:56:25.082962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:55.390 [2024-11-04 14:56:25.083116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.390 [2024-11-04 14:56:25.086087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.390 pt2 00:23:55.390 [2024-11-04 14:56:25.086271] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 malloc3 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 [2024-11-04 14:56:25.154338] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:55.390 [2024-11-04 14:56:25.154427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.390 [2024-11-04 14:56:25.154457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:55.390 [2024-11-04 14:56:25.154473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.390 [2024-11-04 14:56:25.157247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.390 [2024-11-04 14:56:25.157305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:55.390 pt3 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.390 14:56:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 malloc4 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 [2024-11-04 14:56:25.213230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:55.390 [2024-11-04 14:56:25.213568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.390 [2024-11-04 14:56:25.213657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:55.390 [2024-11-04 14:56:25.213779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.390 [2024-11-04 14:56:25.216635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.390 [2024-11-04 14:56:25.216677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:55.390 pt4 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:55.390 [2024-11-04 14:56:25.221387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:55.390 [2024-11-04 14:56:25.223874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:55.390 [2024-11-04 14:56:25.223966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:55.390 [2024-11-04 14:56:25.224060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:55.390 [2024-11-04 14:56:25.224370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:55.390 [2024-11-04 14:56:25.224392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:55.390 [2024-11-04 14:56:25.224672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:55.390 [2024-11-04 14:56:25.231722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:55.390 [2024-11-04 14:56:25.231761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:55.390 [2024-11-04 14:56:25.232006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:55.390 
14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.390 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.648 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.648 "name": "raid_bdev1", 00:23:55.648 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:55.648 "strip_size_kb": 64, 00:23:55.648 "state": "online", 00:23:55.648 "raid_level": "raid5f", 00:23:55.648 "superblock": true, 00:23:55.648 "num_base_bdevs": 4, 00:23:55.648 "num_base_bdevs_discovered": 4, 00:23:55.648 "num_base_bdevs_operational": 4, 00:23:55.648 "base_bdevs_list": [ 00:23:55.648 { 00:23:55.648 "name": "pt1", 00:23:55.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:55.648 "is_configured": true, 00:23:55.648 "data_offset": 2048, 00:23:55.648 "data_size": 63488 00:23:55.648 }, 00:23:55.648 { 00:23:55.648 "name": "pt2", 00:23:55.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:55.648 "is_configured": true, 00:23:55.648 "data_offset": 2048, 00:23:55.648 
"data_size": 63488 00:23:55.648 }, 00:23:55.648 { 00:23:55.648 "name": "pt3", 00:23:55.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:55.648 "is_configured": true, 00:23:55.648 "data_offset": 2048, 00:23:55.648 "data_size": 63488 00:23:55.648 }, 00:23:55.648 { 00:23:55.648 "name": "pt4", 00:23:55.648 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:55.648 "is_configured": true, 00:23:55.648 "data_offset": 2048, 00:23:55.648 "data_size": 63488 00:23:55.648 } 00:23:55.648 ] 00:23:55.648 }' 00:23:55.648 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.648 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.906 [2024-11-04 14:56:25.764131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:55.906 14:56:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:56.175 "name": "raid_bdev1", 00:23:56.175 "aliases": [ 00:23:56.175 "2da4f163-7e97-4809-9afa-a3708a10e6ce" 00:23:56.175 ], 00:23:56.175 "product_name": "Raid Volume", 00:23:56.175 "block_size": 512, 00:23:56.175 "num_blocks": 190464, 00:23:56.175 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:56.175 "assigned_rate_limits": { 00:23:56.175 "rw_ios_per_sec": 0, 00:23:56.175 "rw_mbytes_per_sec": 0, 00:23:56.175 "r_mbytes_per_sec": 0, 00:23:56.175 "w_mbytes_per_sec": 0 00:23:56.175 }, 00:23:56.175 "claimed": false, 00:23:56.175 "zoned": false, 00:23:56.175 "supported_io_types": { 00:23:56.175 "read": true, 00:23:56.175 "write": true, 00:23:56.175 "unmap": false, 00:23:56.175 "flush": false, 00:23:56.175 "reset": true, 00:23:56.175 "nvme_admin": false, 00:23:56.175 "nvme_io": false, 00:23:56.175 "nvme_io_md": false, 00:23:56.175 "write_zeroes": true, 00:23:56.175 "zcopy": false, 00:23:56.175 "get_zone_info": false, 00:23:56.175 "zone_management": false, 00:23:56.175 "zone_append": false, 00:23:56.175 "compare": false, 00:23:56.175 "compare_and_write": false, 00:23:56.175 "abort": false, 00:23:56.175 "seek_hole": false, 00:23:56.175 "seek_data": false, 00:23:56.175 "copy": false, 00:23:56.175 "nvme_iov_md": false 00:23:56.175 }, 00:23:56.175 "driver_specific": { 00:23:56.175 "raid": { 00:23:56.175 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:56.175 "strip_size_kb": 64, 00:23:56.175 "state": "online", 00:23:56.175 "raid_level": "raid5f", 00:23:56.175 "superblock": true, 00:23:56.175 "num_base_bdevs": 4, 00:23:56.175 "num_base_bdevs_discovered": 4, 00:23:56.175 "num_base_bdevs_operational": 4, 00:23:56.175 "base_bdevs_list": [ 00:23:56.175 { 00:23:56.175 "name": "pt1", 00:23:56.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:56.175 "is_configured": true, 00:23:56.175 "data_offset": 2048, 
00:23:56.175 "data_size": 63488 00:23:56.175 }, 00:23:56.175 { 00:23:56.175 "name": "pt2", 00:23:56.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:56.175 "is_configured": true, 00:23:56.175 "data_offset": 2048, 00:23:56.175 "data_size": 63488 00:23:56.175 }, 00:23:56.175 { 00:23:56.175 "name": "pt3", 00:23:56.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:56.175 "is_configured": true, 00:23:56.175 "data_offset": 2048, 00:23:56.175 "data_size": 63488 00:23:56.175 }, 00:23:56.175 { 00:23:56.175 "name": "pt4", 00:23:56.175 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:56.175 "is_configured": true, 00:23:56.175 "data_offset": 2048, 00:23:56.175 "data_size": 63488 00:23:56.175 } 00:23:56.175 ] 00:23:56.175 } 00:23:56.175 } 00:23:56.175 }' 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:56.175 pt2 00:23:56.175 pt3 00:23:56.175 pt4' 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.175 14:56:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:56.175 14:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:56.176 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 [2024-11-04 14:56:26.124166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2da4f163-7e97-4809-9afa-a3708a10e6ce 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
2da4f163-7e97-4809-9afa-a3708a10e6ce ']' 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 [2024-11-04 14:56:26.171958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:56.464 [2024-11-04 14:56:26.171998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:56.464 [2024-11-04 14:56:26.172096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.464 [2024-11-04 14:56:26.172210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:56.464 [2024-11-04 14:56:26.172252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:56.464 
14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 14:56:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.464 [2024-11-04 14:56:26.332043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:56.464 [2024-11-04 14:56:26.334654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:56.464 [2024-11-04 14:56:26.334718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:56.464 [2024-11-04 14:56:26.334769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:56.464 [2024-11-04 14:56:26.334839] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:56.464 [2024-11-04 14:56:26.334938] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:56.464 [2024-11-04 14:56:26.334979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:56.464 [2024-11-04 14:56:26.335013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:56.464 [2024-11-04 14:56:26.335037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:56.464 [2024-11-04 14:56:26.335053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:56.464 request: 00:23:56.464 { 00:23:56.464 "name": "raid_bdev1", 00:23:56.464 "raid_level": "raid5f", 00:23:56.464 "base_bdevs": [ 00:23:56.464 "malloc1", 00:23:56.464 "malloc2", 00:23:56.464 "malloc3", 00:23:56.464 "malloc4" 00:23:56.464 ], 00:23:56.464 "strip_size_kb": 64, 00:23:56.464 "superblock": false, 00:23:56.464 "method": "bdev_raid_create", 00:23:56.464 "req_id": 1 00:23:56.464 } 00:23:56.464 Got JSON-RPC error response 
00:23:56.464 response: 00:23:56.464 { 00:23:56.464 "code": -17, 00:23:56.464 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:56.464 } 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:23:56.464 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:56.465 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:56.465 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:56.465 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.465 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:56.465 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.465 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.465 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.723 [2024-11-04 14:56:26.395975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:56.723 [2024-11-04 14:56:26.396043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:23:56.723 [2024-11-04 14:56:26.396070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:56.723 [2024-11-04 14:56:26.396088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.723 [2024-11-04 14:56:26.398897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.723 [2024-11-04 14:56:26.398967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:56.723 [2024-11-04 14:56:26.399053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:56.723 [2024-11-04 14:56:26.399128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:56.723 pt1 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.723 "name": "raid_bdev1", 00:23:56.723 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:56.723 "strip_size_kb": 64, 00:23:56.723 "state": "configuring", 00:23:56.723 "raid_level": "raid5f", 00:23:56.723 "superblock": true, 00:23:56.723 "num_base_bdevs": 4, 00:23:56.723 "num_base_bdevs_discovered": 1, 00:23:56.723 "num_base_bdevs_operational": 4, 00:23:56.723 "base_bdevs_list": [ 00:23:56.723 { 00:23:56.723 "name": "pt1", 00:23:56.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:56.723 "is_configured": true, 00:23:56.723 "data_offset": 2048, 00:23:56.723 "data_size": 63488 00:23:56.723 }, 00:23:56.723 { 00:23:56.723 "name": null, 00:23:56.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:56.723 "is_configured": false, 00:23:56.723 "data_offset": 2048, 00:23:56.723 "data_size": 63488 00:23:56.723 }, 00:23:56.723 { 00:23:56.723 "name": null, 00:23:56.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:56.723 "is_configured": false, 00:23:56.723 "data_offset": 2048, 00:23:56.723 "data_size": 63488 00:23:56.723 }, 00:23:56.723 { 00:23:56.723 "name": null, 00:23:56.723 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:56.723 "is_configured": false, 00:23:56.723 "data_offset": 2048, 00:23:56.723 "data_size": 63488 00:23:56.723 } 00:23:56.723 ] 00:23:56.723 }' 
00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.723 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.289 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:23:57.289 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:57.289 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.289 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.289 [2024-11-04 14:56:26.928265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:57.289 [2024-11-04 14:56:26.928384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.290 [2024-11-04 14:56:26.928414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:57.290 [2024-11-04 14:56:26.928432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.290 [2024-11-04 14:56:26.929014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.290 [2024-11-04 14:56:26.929051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:57.290 [2024-11-04 14:56:26.929152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:57.290 [2024-11-04 14:56:26.929187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:57.290 pt2 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 [2024-11-04 14:56:26.936209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.290 "name": "raid_bdev1", 00:23:57.290 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:57.290 "strip_size_kb": 64, 00:23:57.290 "state": "configuring", 00:23:57.290 "raid_level": "raid5f", 00:23:57.290 "superblock": true, 00:23:57.290 "num_base_bdevs": 4, 00:23:57.290 "num_base_bdevs_discovered": 1, 00:23:57.290 "num_base_bdevs_operational": 4, 00:23:57.290 "base_bdevs_list": [ 00:23:57.290 { 00:23:57.290 "name": "pt1", 00:23:57.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:57.290 "is_configured": true, 00:23:57.290 "data_offset": 2048, 00:23:57.290 "data_size": 63488 00:23:57.290 }, 00:23:57.290 { 00:23:57.290 "name": null, 00:23:57.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.290 "is_configured": false, 00:23:57.290 "data_offset": 0, 00:23:57.290 "data_size": 63488 00:23:57.290 }, 00:23:57.290 { 00:23:57.290 "name": null, 00:23:57.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:57.290 "is_configured": false, 00:23:57.290 "data_offset": 2048, 00:23:57.290 "data_size": 63488 00:23:57.290 }, 00:23:57.290 { 00:23:57.290 "name": null, 00:23:57.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:57.290 "is_configured": false, 00:23:57.290 "data_offset": 2048, 00:23:57.290 "data_size": 63488 00:23:57.290 } 00:23:57.290 ] 00:23:57.290 }' 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.290 14:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.857 [2024-11-04 14:56:27.456496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:57.857 [2024-11-04 14:56:27.456606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.857 [2024-11-04 14:56:27.456639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:57.857 [2024-11-04 14:56:27.456655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.857 [2024-11-04 14:56:27.457292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.857 [2024-11-04 14:56:27.457318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:57.857 [2024-11-04 14:56:27.457423] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:57.857 [2024-11-04 14:56:27.457454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:57.857 pt2 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:57.857 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.858 [2024-11-04 14:56:27.468414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:23:57.858 [2024-11-04 14:56:27.468486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.858 [2024-11-04 14:56:27.468530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:57.858 [2024-11-04 14:56:27.468544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.858 [2024-11-04 14:56:27.468982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.858 [2024-11-04 14:56:27.469029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:57.858 [2024-11-04 14:56:27.469107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:57.858 [2024-11-04 14:56:27.469135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:57.858 pt3 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.858 [2024-11-04 14:56:27.476383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:57.858 [2024-11-04 14:56:27.476443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.858 [2024-11-04 14:56:27.476479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:57.858 [2024-11-04 14:56:27.476493] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.858 [2024-11-04 14:56:27.476926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.858 [2024-11-04 14:56:27.476955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:57.858 [2024-11-04 14:56:27.477031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:57.858 [2024-11-04 14:56:27.477073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:57.858 [2024-11-04 14:56:27.477256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:57.858 [2024-11-04 14:56:27.477272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:57.858 [2024-11-04 14:56:27.477578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:57.858 [2024-11-04 14:56:27.484024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:57.858 [2024-11-04 14:56:27.484055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:57.858 [2024-11-04 14:56:27.484318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.858 pt4 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.858 "name": "raid_bdev1", 00:23:57.858 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:57.858 "strip_size_kb": 64, 00:23:57.858 "state": "online", 00:23:57.858 "raid_level": "raid5f", 00:23:57.858 "superblock": true, 00:23:57.858 "num_base_bdevs": 4, 00:23:57.858 "num_base_bdevs_discovered": 4, 00:23:57.858 "num_base_bdevs_operational": 4, 00:23:57.858 "base_bdevs_list": [ 00:23:57.858 { 00:23:57.858 "name": "pt1", 00:23:57.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:57.858 "is_configured": true, 00:23:57.858 
"data_offset": 2048, 00:23:57.858 "data_size": 63488 00:23:57.858 }, 00:23:57.858 { 00:23:57.858 "name": "pt2", 00:23:57.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.858 "is_configured": true, 00:23:57.858 "data_offset": 2048, 00:23:57.858 "data_size": 63488 00:23:57.858 }, 00:23:57.858 { 00:23:57.858 "name": "pt3", 00:23:57.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:57.858 "is_configured": true, 00:23:57.858 "data_offset": 2048, 00:23:57.858 "data_size": 63488 00:23:57.858 }, 00:23:57.858 { 00:23:57.858 "name": "pt4", 00:23:57.858 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:57.858 "is_configured": true, 00:23:57.858 "data_offset": 2048, 00:23:57.858 "data_size": 63488 00:23:57.858 } 00:23:57.858 ] 00:23:57.858 }' 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.858 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:58.116 14:56:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.116 14:56:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.116 [2024-11-04 14:56:28.004177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:58.401 "name": "raid_bdev1", 00:23:58.401 "aliases": [ 00:23:58.401 "2da4f163-7e97-4809-9afa-a3708a10e6ce" 00:23:58.401 ], 00:23:58.401 "product_name": "Raid Volume", 00:23:58.401 "block_size": 512, 00:23:58.401 "num_blocks": 190464, 00:23:58.401 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:58.401 "assigned_rate_limits": { 00:23:58.401 "rw_ios_per_sec": 0, 00:23:58.401 "rw_mbytes_per_sec": 0, 00:23:58.401 "r_mbytes_per_sec": 0, 00:23:58.401 "w_mbytes_per_sec": 0 00:23:58.401 }, 00:23:58.401 "claimed": false, 00:23:58.401 "zoned": false, 00:23:58.401 "supported_io_types": { 00:23:58.401 "read": true, 00:23:58.401 "write": true, 00:23:58.401 "unmap": false, 00:23:58.401 "flush": false, 00:23:58.401 "reset": true, 00:23:58.401 "nvme_admin": false, 00:23:58.401 "nvme_io": false, 00:23:58.401 "nvme_io_md": false, 00:23:58.401 "write_zeroes": true, 00:23:58.401 "zcopy": false, 00:23:58.401 "get_zone_info": false, 00:23:58.401 "zone_management": false, 00:23:58.401 "zone_append": false, 00:23:58.401 "compare": false, 00:23:58.401 "compare_and_write": false, 00:23:58.401 "abort": false, 00:23:58.401 "seek_hole": false, 00:23:58.401 "seek_data": false, 00:23:58.401 "copy": false, 00:23:58.401 "nvme_iov_md": false 00:23:58.401 }, 00:23:58.401 "driver_specific": { 00:23:58.401 "raid": { 00:23:58.401 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:58.401 "strip_size_kb": 64, 00:23:58.401 "state": "online", 00:23:58.401 "raid_level": "raid5f", 00:23:58.401 "superblock": true, 00:23:58.401 "num_base_bdevs": 4, 00:23:58.401 "num_base_bdevs_discovered": 4, 
00:23:58.401 "num_base_bdevs_operational": 4, 00:23:58.401 "base_bdevs_list": [ 00:23:58.401 { 00:23:58.401 "name": "pt1", 00:23:58.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:58.401 "is_configured": true, 00:23:58.401 "data_offset": 2048, 00:23:58.401 "data_size": 63488 00:23:58.401 }, 00:23:58.401 { 00:23:58.401 "name": "pt2", 00:23:58.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:58.401 "is_configured": true, 00:23:58.401 "data_offset": 2048, 00:23:58.401 "data_size": 63488 00:23:58.401 }, 00:23:58.401 { 00:23:58.401 "name": "pt3", 00:23:58.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:58.401 "is_configured": true, 00:23:58.401 "data_offset": 2048, 00:23:58.401 "data_size": 63488 00:23:58.401 }, 00:23:58.401 { 00:23:58.401 "name": "pt4", 00:23:58.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:58.401 "is_configured": true, 00:23:58.401 "data_offset": 2048, 00:23:58.401 "data_size": 63488 00:23:58.401 } 00:23:58.401 ] 00:23:58.401 } 00:23:58.401 } 00:23:58.401 }' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:58.401 pt2 00:23:58.401 pt3 00:23:58.401 pt4' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.401 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:58.660 [2024-11-04 14:56:28.376157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.660 14:56:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2da4f163-7e97-4809-9afa-a3708a10e6ce '!=' 2da4f163-7e97-4809-9afa-a3708a10e6ce ']' 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.660 [2024-11-04 14:56:28.420027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.660 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.660 "name": "raid_bdev1", 00:23:58.660 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:58.660 "strip_size_kb": 64, 00:23:58.661 "state": "online", 00:23:58.661 "raid_level": "raid5f", 00:23:58.661 "superblock": true, 00:23:58.661 "num_base_bdevs": 4, 00:23:58.661 "num_base_bdevs_discovered": 3, 00:23:58.661 "num_base_bdevs_operational": 3, 00:23:58.661 "base_bdevs_list": [ 00:23:58.661 { 00:23:58.661 "name": null, 00:23:58.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.661 "is_configured": false, 00:23:58.661 "data_offset": 0, 00:23:58.661 "data_size": 63488 00:23:58.661 }, 00:23:58.661 { 00:23:58.661 "name": "pt2", 00:23:58.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:58.661 "is_configured": true, 00:23:58.661 "data_offset": 2048, 00:23:58.661 "data_size": 63488 00:23:58.661 }, 00:23:58.661 { 00:23:58.661 "name": "pt3", 00:23:58.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:58.661 "is_configured": true, 00:23:58.661 "data_offset": 2048, 00:23:58.661 "data_size": 63488 00:23:58.661 }, 00:23:58.661 { 00:23:58.661 "name": "pt4", 00:23:58.661 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:58.661 "is_configured": true, 00:23:58.661 
"data_offset": 2048, 00:23:58.661 "data_size": 63488 00:23:58.661 } 00:23:58.661 ] 00:23:58.661 }' 00:23:58.661 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.661 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.228 [2024-11-04 14:56:28.956154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.228 [2024-11-04 14:56:28.956192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.228 [2024-11-04 14:56:28.956336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.228 [2024-11-04 14:56:28.956444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.228 [2024-11-04 14:56:28.956461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:59.228 14:56:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.228 [2024-11-04 14:56:29.052144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:59.228 [2024-11-04 14:56:29.052223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.228 [2024-11-04 14:56:29.052302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:59.228 [2024-11-04 14:56:29.052321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.228 [2024-11-04 14:56:29.055695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.228 [2024-11-04 14:56:29.055736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:59.228 [2024-11-04 14:56:29.055868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:59.228 [2024-11-04 14:56:29.055962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:59.228 pt2 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:59.228 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.229 "name": "raid_bdev1", 00:23:59.229 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:59.229 "strip_size_kb": 64, 00:23:59.229 "state": "configuring", 00:23:59.229 "raid_level": "raid5f", 00:23:59.229 "superblock": true, 00:23:59.229 
"num_base_bdevs": 4, 00:23:59.229 "num_base_bdevs_discovered": 1, 00:23:59.229 "num_base_bdevs_operational": 3, 00:23:59.229 "base_bdevs_list": [ 00:23:59.229 { 00:23:59.229 "name": null, 00:23:59.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.229 "is_configured": false, 00:23:59.229 "data_offset": 2048, 00:23:59.229 "data_size": 63488 00:23:59.229 }, 00:23:59.229 { 00:23:59.229 "name": "pt2", 00:23:59.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.229 "is_configured": true, 00:23:59.229 "data_offset": 2048, 00:23:59.229 "data_size": 63488 00:23:59.229 }, 00:23:59.229 { 00:23:59.229 "name": null, 00:23:59.229 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:59.229 "is_configured": false, 00:23:59.229 "data_offset": 2048, 00:23:59.229 "data_size": 63488 00:23:59.229 }, 00:23:59.229 { 00:23:59.229 "name": null, 00:23:59.229 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:59.229 "is_configured": false, 00:23:59.229 "data_offset": 2048, 00:23:59.229 "data_size": 63488 00:23:59.229 } 00:23:59.229 ] 00:23:59.229 }' 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.229 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.794 [2024-11-04 14:56:29.588461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:59.794 [2024-11-04 
14:56:29.588713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.794 [2024-11-04 14:56:29.588761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:59.794 [2024-11-04 14:56:29.588779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.794 [2024-11-04 14:56:29.589475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.794 [2024-11-04 14:56:29.589501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:59.794 [2024-11-04 14:56:29.589670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:59.794 [2024-11-04 14:56:29.589713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:59.794 pt3 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.794 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.794 "name": "raid_bdev1", 00:23:59.794 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:23:59.794 "strip_size_kb": 64, 00:23:59.794 "state": "configuring", 00:23:59.794 "raid_level": "raid5f", 00:23:59.794 "superblock": true, 00:23:59.794 "num_base_bdevs": 4, 00:23:59.794 "num_base_bdevs_discovered": 2, 00:23:59.794 "num_base_bdevs_operational": 3, 00:23:59.794 "base_bdevs_list": [ 00:23:59.794 { 00:23:59.794 "name": null, 00:23:59.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.794 "is_configured": false, 00:23:59.794 "data_offset": 2048, 00:23:59.794 "data_size": 63488 00:23:59.794 }, 00:23:59.795 { 00:23:59.795 "name": "pt2", 00:23:59.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.795 "is_configured": true, 00:23:59.795 "data_offset": 2048, 00:23:59.795 "data_size": 63488 00:23:59.795 }, 00:23:59.795 { 00:23:59.795 "name": "pt3", 00:23:59.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:59.795 "is_configured": true, 00:23:59.795 "data_offset": 2048, 00:23:59.795 "data_size": 63488 00:23:59.795 }, 00:23:59.795 { 00:23:59.795 "name": null, 00:23:59.795 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:59.795 "is_configured": false, 00:23:59.795 "data_offset": 2048, 
00:23:59.795 "data_size": 63488 00:23:59.795 } 00:23:59.795 ] 00:23:59.795 }' 00:23:59.795 14:56:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.795 14:56:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.358 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:24:00.358 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:00.358 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:24:00.358 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:00.358 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.358 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.358 [2024-11-04 14:56:30.108693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:00.358 [2024-11-04 14:56:30.108791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.358 [2024-11-04 14:56:30.108830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:00.359 [2024-11-04 14:56:30.108845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.359 [2024-11-04 14:56:30.109501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.359 [2024-11-04 14:56:30.109527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:00.359 [2024-11-04 14:56:30.109671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:00.359 [2024-11-04 14:56:30.109714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:00.359 [2024-11-04 14:56:30.109899] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:00.359 [2024-11-04 14:56:30.109915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:00.359 [2024-11-04 14:56:30.110283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:00.359 [2024-11-04 14:56:30.117000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:00.359 [2024-11-04 14:56:30.117030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:00.359 [2024-11-04 14:56:30.117416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.359 pt4 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.359 
14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.359 "name": "raid_bdev1", 00:24:00.359 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:24:00.359 "strip_size_kb": 64, 00:24:00.359 "state": "online", 00:24:00.359 "raid_level": "raid5f", 00:24:00.359 "superblock": true, 00:24:00.359 "num_base_bdevs": 4, 00:24:00.359 "num_base_bdevs_discovered": 3, 00:24:00.359 "num_base_bdevs_operational": 3, 00:24:00.359 "base_bdevs_list": [ 00:24:00.359 { 00:24:00.359 "name": null, 00:24:00.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.359 "is_configured": false, 00:24:00.359 "data_offset": 2048, 00:24:00.359 "data_size": 63488 00:24:00.359 }, 00:24:00.359 { 00:24:00.359 "name": "pt2", 00:24:00.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.359 "is_configured": true, 00:24:00.359 "data_offset": 2048, 00:24:00.359 "data_size": 63488 00:24:00.359 }, 00:24:00.359 { 00:24:00.359 "name": "pt3", 00:24:00.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:00.359 "is_configured": true, 00:24:00.359 "data_offset": 2048, 00:24:00.359 "data_size": 63488 00:24:00.359 }, 00:24:00.359 { 00:24:00.359 "name": "pt4", 00:24:00.359 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:00.359 "is_configured": true, 00:24:00.359 "data_offset": 2048, 00:24:00.359 "data_size": 63488 00:24:00.359 } 00:24:00.359 ] 00:24:00.359 }' 00:24:00.359 14:56:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.359 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.926 [2024-11-04 14:56:30.653751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:00.926 [2024-11-04 14:56:30.653791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:00.926 [2024-11-04 14:56:30.653957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:00.926 [2024-11-04 14:56:30.654091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:00.926 [2024-11-04 14:56:30.654114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.926 [2024-11-04 14:56:30.725749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:00.926 [2024-11-04 14:56:30.726007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.926 [2024-11-04 14:56:30.726088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:24:00.926 [2024-11-04 14:56:30.726316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.926 [2024-11-04 14:56:30.729889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.926 [2024-11-04 14:56:30.730146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:00.926 [2024-11-04 14:56:30.730325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:00.926 [2024-11-04 14:56:30.730418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:00.926 
[2024-11-04 14:56:30.730719] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:00.926 [2024-11-04 14:56:30.730741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:00.926 pt1 00:24:00.926 [2024-11-04 14:56:30.730762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:00.926 [2024-11-04 14:56:30.730832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:00.926 [2024-11-04 14:56:30.731012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.926 "name": "raid_bdev1", 00:24:00.926 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:24:00.926 "strip_size_kb": 64, 00:24:00.926 "state": "configuring", 00:24:00.926 "raid_level": "raid5f", 00:24:00.926 "superblock": true, 00:24:00.926 "num_base_bdevs": 4, 00:24:00.926 "num_base_bdevs_discovered": 2, 00:24:00.926 "num_base_bdevs_operational": 3, 00:24:00.926 "base_bdevs_list": [ 00:24:00.926 { 00:24:00.926 "name": null, 00:24:00.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.926 "is_configured": false, 00:24:00.926 "data_offset": 2048, 00:24:00.926 "data_size": 63488 00:24:00.926 }, 00:24:00.926 { 00:24:00.926 "name": "pt2", 00:24:00.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.926 "is_configured": true, 00:24:00.926 "data_offset": 2048, 00:24:00.926 "data_size": 63488 00:24:00.926 }, 00:24:00.926 { 00:24:00.926 "name": "pt3", 00:24:00.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:00.926 "is_configured": true, 00:24:00.926 "data_offset": 2048, 00:24:00.926 "data_size": 63488 00:24:00.926 }, 00:24:00.926 { 00:24:00.926 "name": null, 00:24:00.926 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:00.926 "is_configured": false, 00:24:00.926 "data_offset": 2048, 00:24:00.926 "data_size": 63488 00:24:00.926 } 00:24:00.926 ] 
00:24:00.926 }' 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.926 14:56:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.492 [2024-11-04 14:56:31.334636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:01.492 [2024-11-04 14:56:31.334734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.492 [2024-11-04 14:56:31.334777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:01.492 [2024-11-04 14:56:31.334792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.492 [2024-11-04 14:56:31.335476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.492 [2024-11-04 14:56:31.335502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:24:01.492 [2024-11-04 14:56:31.335673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:01.492 [2024-11-04 14:56:31.335713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:01.492 [2024-11-04 14:56:31.335904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:01.492 [2024-11-04 14:56:31.335918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:01.492 [2024-11-04 14:56:31.336949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:01.492 [2024-11-04 14:56:31.343451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:01.492 [2024-11-04 14:56:31.343480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:01.492 [2024-11-04 14:56:31.343840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.492 pt4 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:01.492 14:56:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.492 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.750 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:01.750 "name": "raid_bdev1", 00:24:01.750 "uuid": "2da4f163-7e97-4809-9afa-a3708a10e6ce", 00:24:01.750 "strip_size_kb": 64, 00:24:01.750 "state": "online", 00:24:01.750 "raid_level": "raid5f", 00:24:01.750 "superblock": true, 00:24:01.750 "num_base_bdevs": 4, 00:24:01.750 "num_base_bdevs_discovered": 3, 00:24:01.750 "num_base_bdevs_operational": 3, 00:24:01.750 "base_bdevs_list": [ 00:24:01.750 { 00:24:01.750 "name": null, 00:24:01.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.750 "is_configured": false, 00:24:01.750 "data_offset": 2048, 00:24:01.750 "data_size": 63488 00:24:01.750 }, 00:24:01.750 { 00:24:01.750 "name": "pt2", 00:24:01.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:01.750 "is_configured": true, 00:24:01.750 "data_offset": 2048, 00:24:01.750 "data_size": 63488 00:24:01.751 }, 00:24:01.751 { 00:24:01.751 "name": "pt3", 00:24:01.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:01.751 "is_configured": true, 00:24:01.751 "data_offset": 2048, 00:24:01.751 "data_size": 63488 
00:24:01.751 }, 00:24:01.751 { 00:24:01.751 "name": "pt4", 00:24:01.751 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:01.751 "is_configured": true, 00:24:01.751 "data_offset": 2048, 00:24:01.751 "data_size": 63488 00:24:01.751 } 00:24:01.751 ] 00:24:01.751 }' 00:24:01.751 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:01.751 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.008 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:02.008 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.008 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.009 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:02.267 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.267 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:02.267 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:02.267 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:02.267 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.267 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.267 [2024-11-04 14:56:31.952485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:02.267 14:56:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.267 14:56:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2da4f163-7e97-4809-9afa-a3708a10e6ce '!=' 2da4f163-7e97-4809-9afa-a3708a10e6ce ']' 00:24:02.267 14:56:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84676 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84676 ']' 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84676 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84676 00:24:02.267 killing process with pid 84676 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84676' 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84676 00:24:02.267 [2024-11-04 14:56:32.035009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:02.267 14:56:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84676 00:24:02.267 [2024-11-04 14:56:32.035133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:02.267 [2024-11-04 14:56:32.035243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:02.267 [2024-11-04 14:56:32.035265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:02.835 [2024-11-04 14:56:32.429105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:03.770 14:56:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:03.770 
00:24:03.770 real 0m9.641s 00:24:03.770 user 0m15.817s 00:24:03.770 sys 0m1.328s 00:24:03.770 ************************************ 00:24:03.770 END TEST raid5f_superblock_test 00:24:03.770 ************************************ 00:24:03.770 14:56:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:03.770 14:56:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.770 14:56:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:24:03.770 14:56:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:24:03.770 14:56:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:03.770 14:56:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:03.770 14:56:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:03.770 ************************************ 00:24:03.770 START TEST raid5f_rebuild_test 00:24:03.770 ************************************ 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:24:03.770 14:56:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:24:03.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85168 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85168 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 85168 ']' 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:03.770 14:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:03.771 14:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.771 14:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:03.771 14:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.029 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:04.029 Zero copy mechanism will not be used. 00:24:04.029 [2024-11-04 14:56:33.699459] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:24:04.029 [2024-11-04 14:56:33.699644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85168 ] 00:24:04.029 [2024-11-04 14:56:33.890487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.287 [2024-11-04 14:56:34.038119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.546 [2024-11-04 14:56:34.257924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:04.546 [2024-11-04 14:56:34.258360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 BaseBdev1_malloc 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 [2024-11-04 14:56:34.804855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:24:05.113 [2024-11-04 14:56:34.804940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.113 [2024-11-04 14:56:34.804972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:05.113 [2024-11-04 14:56:34.804989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.113 [2024-11-04 14:56:34.808252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.113 [2024-11-04 14:56:34.808328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:05.113 BaseBdev1 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 BaseBdev2_malloc 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 [2024-11-04 14:56:34.861448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:05.113 [2024-11-04 14:56:34.861535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.113 [2024-11-04 14:56:34.861563] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:05.113 [2024-11-04 14:56:34.861592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.113 [2024-11-04 14:56:34.864921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.113 [2024-11-04 14:56:34.864983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:05.113 BaseBdev2 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 BaseBdev3_malloc 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 [2024-11-04 14:56:34.934346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:05.113 [2024-11-04 14:56:34.934426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.113 [2024-11-04 14:56:34.934456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:05.113 [2024-11-04 14:56:34.934473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.113 
[2024-11-04 14:56:34.937538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.113 [2024-11-04 14:56:34.937786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:05.113 BaseBdev3 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.113 BaseBdev4_malloc 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.113 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:05.114 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.114 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.114 [2024-11-04 14:56:34.992934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:05.114 [2024-11-04 14:56:34.992998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.114 [2024-11-04 14:56:34.993059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:05.114 [2024-11-04 14:56:34.993077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.114 [2024-11-04 14:56:34.996594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.114 [2024-11-04 14:56:34.996637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:24:05.114 BaseBdev4 00:24:05.114 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.114 14:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:05.114 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.114 14:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.372 spare_malloc 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.372 spare_delay 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.372 [2024-11-04 14:56:35.055183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:05.372 [2024-11-04 14:56:35.055275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.372 [2024-11-04 14:56:35.055306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:05.372 [2024-11-04 14:56:35.055324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.372 [2024-11-04 14:56:35.058541] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.372 [2024-11-04 14:56:35.058630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:05.372 spare 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.372 [2024-11-04 14:56:35.067281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:05.372 [2024-11-04 14:56:35.069972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:05.372 [2024-11-04 14:56:35.070117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:05.372 [2024-11-04 14:56:35.070218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:05.372 [2024-11-04 14:56:35.070412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:05.372 [2024-11-04 14:56:35.070436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:05.372 [2024-11-04 14:56:35.070778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:05.372 [2024-11-04 14:56:35.077310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:05.372 [2024-11-04 14:56:35.077337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:05.372 [2024-11-04 14:56:35.077635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.372 14:56:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.372 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.372 "name": "raid_bdev1", 00:24:05.372 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:05.372 "strip_size_kb": 64, 00:24:05.373 "state": "online", 00:24:05.373 
"raid_level": "raid5f", 00:24:05.373 "superblock": false, 00:24:05.373 "num_base_bdevs": 4, 00:24:05.373 "num_base_bdevs_discovered": 4, 00:24:05.373 "num_base_bdevs_operational": 4, 00:24:05.373 "base_bdevs_list": [ 00:24:05.373 { 00:24:05.373 "name": "BaseBdev1", 00:24:05.373 "uuid": "901b9044-c7d9-531d-9d7a-7734bc900720", 00:24:05.373 "is_configured": true, 00:24:05.373 "data_offset": 0, 00:24:05.373 "data_size": 65536 00:24:05.373 }, 00:24:05.373 { 00:24:05.373 "name": "BaseBdev2", 00:24:05.373 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:05.373 "is_configured": true, 00:24:05.373 "data_offset": 0, 00:24:05.373 "data_size": 65536 00:24:05.373 }, 00:24:05.373 { 00:24:05.373 "name": "BaseBdev3", 00:24:05.373 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:05.373 "is_configured": true, 00:24:05.373 "data_offset": 0, 00:24:05.373 "data_size": 65536 00:24:05.373 }, 00:24:05.373 { 00:24:05.373 "name": "BaseBdev4", 00:24:05.373 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:05.373 "is_configured": true, 00:24:05.373 "data_offset": 0, 00:24:05.373 "data_size": 65536 00:24:05.373 } 00:24:05.373 ] 00:24:05.373 }' 00:24:05.373 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.373 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 [2024-11-04 14:56:35.614078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:24:05.939 14:56:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:06.196 [2024-11-04 14:56:36.002058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:06.196 /dev/nbd0 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:06.196 1+0 records in 00:24:06.196 1+0 records out 00:24:06.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336141 s, 12.2 MB/s 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:06.196 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:06.197 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:24:06.197 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:06.197 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:06.197 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:24:06.197 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:24:06.197 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:24:06.197 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:24:07.170 512+0 records in 00:24:07.170 512+0 records out 00:24:07.170 100663296 bytes (101 MB, 96 MiB) copied, 0.652508 s, 154 MB/s 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:07.170 
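[Editor's note] The `dd` numbers above are not arbitrary: `bs=196608` is exactly one full raid5f stripe of data, so every write is a full-stripe write and no read-modify-write is needed. A minimal sketch of the arithmetic, using this run's values (strip_size_kb=64, blocklen=512, 4 base bdevs, one strip of parity per stripe):

```shell
strip_size_kb=64
blocklen=512
num_base_bdevs=4
data_bdevs=$((num_base_bdevs - 1))                 # raid5f: one strip per stripe holds parity
strip_blocks=$((strip_size_kb * 1024 / blocklen))  # 128 blocks per strip
write_unit_size=$((strip_blocks * data_bdevs))     # 384 blocks = one full data stripe
full_stripe_bytes=$((write_unit_size * blocklen))  # 196608 bytes -> the dd bs above
total_bytes=$((full_stripe_bytes * 512))           # dd count=512 -> 100663296 bytes (96 MiB)
echo "$write_unit_size $full_stripe_bytes $total_bytes"
```

This reproduces the harness's `write_unit_size=384` and the `100663296 bytes ... copied` line in the `dd` output.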
[2024-11-04 14:56:36.979137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.170 [2024-11-04 14:56:36.992489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.170 14:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.170 14:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.170 14:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.170 14:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.170 14:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.170 14:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.170 "name": "raid_bdev1", 00:24:07.170 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:07.170 "strip_size_kb": 64, 00:24:07.170 "state": "online", 00:24:07.170 "raid_level": "raid5f", 00:24:07.170 "superblock": false, 00:24:07.170 "num_base_bdevs": 4, 00:24:07.170 "num_base_bdevs_discovered": 3, 00:24:07.170 "num_base_bdevs_operational": 3, 00:24:07.170 "base_bdevs_list": [ 00:24:07.170 { 00:24:07.170 "name": null, 00:24:07.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.170 "is_configured": false, 00:24:07.170 "data_offset": 0, 00:24:07.170 "data_size": 65536 00:24:07.171 }, 00:24:07.171 { 00:24:07.171 "name": "BaseBdev2", 00:24:07.171 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:07.171 "is_configured": true, 00:24:07.171 "data_offset": 0, 00:24:07.171 "data_size": 65536 00:24:07.171 }, 00:24:07.171 { 00:24:07.171 "name": "BaseBdev3", 00:24:07.171 "uuid": 
"8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:07.171 "is_configured": true, 00:24:07.171 "data_offset": 0, 00:24:07.171 "data_size": 65536 00:24:07.171 }, 00:24:07.171 { 00:24:07.171 "name": "BaseBdev4", 00:24:07.171 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:07.171 "is_configured": true, 00:24:07.171 "data_offset": 0, 00:24:07.171 "data_size": 65536 00:24:07.171 } 00:24:07.171 ] 00:24:07.171 }' 00:24:07.171 14:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.171 14:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.739 14:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:07.739 14:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.739 14:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.739 [2024-11-04 14:56:37.508699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.739 [2024-11-04 14:56:37.522174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:24:07.739 14:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.739 14:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:07.739 [2024-11-04 14:56:37.530624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.693 14:56:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.693 "name": "raid_bdev1", 00:24:08.693 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:08.693 "strip_size_kb": 64, 00:24:08.693 "state": "online", 00:24:08.693 "raid_level": "raid5f", 00:24:08.693 "superblock": false, 00:24:08.693 "num_base_bdevs": 4, 00:24:08.693 "num_base_bdevs_discovered": 4, 00:24:08.693 "num_base_bdevs_operational": 4, 00:24:08.693 "process": { 00:24:08.693 "type": "rebuild", 00:24:08.693 "target": "spare", 00:24:08.693 "progress": { 00:24:08.693 "blocks": 17280, 00:24:08.693 "percent": 8 00:24:08.693 } 00:24:08.693 }, 00:24:08.693 "base_bdevs_list": [ 00:24:08.693 { 00:24:08.693 "name": "spare", 00:24:08.693 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:08.693 "is_configured": true, 00:24:08.693 "data_offset": 0, 00:24:08.693 "data_size": 65536 00:24:08.693 }, 00:24:08.693 { 00:24:08.693 "name": "BaseBdev2", 00:24:08.693 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:08.693 "is_configured": true, 00:24:08.693 "data_offset": 0, 00:24:08.693 "data_size": 65536 00:24:08.693 }, 00:24:08.693 { 00:24:08.693 "name": "BaseBdev3", 00:24:08.693 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:08.693 "is_configured": true, 00:24:08.693 "data_offset": 0, 00:24:08.693 "data_size": 65536 00:24:08.693 }, 
00:24:08.693 { 00:24:08.693 "name": "BaseBdev4", 00:24:08.693 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:08.693 "is_configured": true, 00:24:08.693 "data_offset": 0, 00:24:08.693 "data_size": 65536 00:24:08.693 } 00:24:08.693 ] 00:24:08.693 }' 00:24:08.693 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.952 [2024-11-04 14:56:38.676200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:08.952 [2024-11-04 14:56:38.744482] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:08.952 [2024-11-04 14:56:38.744574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.952 [2024-11-04 14:56:38.744600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:08.952 [2024-11-04 14:56:38.744614] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.952 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:08.952 "name": "raid_bdev1", 00:24:08.952 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:08.952 "strip_size_kb": 64, 00:24:08.952 "state": "online", 00:24:08.952 "raid_level": "raid5f", 00:24:08.952 "superblock": false, 00:24:08.953 "num_base_bdevs": 4, 00:24:08.953 "num_base_bdevs_discovered": 3, 00:24:08.953 "num_base_bdevs_operational": 3, 00:24:08.953 "base_bdevs_list": [ 00:24:08.953 { 00:24:08.953 "name": null, 00:24:08.953 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:08.953 "is_configured": false, 00:24:08.953 "data_offset": 0, 00:24:08.953 "data_size": 65536 00:24:08.953 }, 00:24:08.953 { 00:24:08.953 "name": "BaseBdev2", 00:24:08.953 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:08.953 "is_configured": true, 00:24:08.953 "data_offset": 0, 00:24:08.953 "data_size": 65536 00:24:08.953 }, 00:24:08.953 { 00:24:08.953 "name": "BaseBdev3", 00:24:08.953 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:08.953 "is_configured": true, 00:24:08.953 "data_offset": 0, 00:24:08.953 "data_size": 65536 00:24:08.953 }, 00:24:08.953 { 00:24:08.953 "name": "BaseBdev4", 00:24:08.953 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:08.953 "is_configured": true, 00:24:08.953 "data_offset": 0, 00:24:08.953 "data_size": 65536 00:24:08.953 } 00:24:08.953 ] 00:24:08.953 }' 00:24:08.953 14:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:08.953 14:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.520 14:56:39 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.520 "name": "raid_bdev1", 00:24:09.520 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:09.520 "strip_size_kb": 64, 00:24:09.520 "state": "online", 00:24:09.520 "raid_level": "raid5f", 00:24:09.520 "superblock": false, 00:24:09.520 "num_base_bdevs": 4, 00:24:09.520 "num_base_bdevs_discovered": 3, 00:24:09.520 "num_base_bdevs_operational": 3, 00:24:09.520 "base_bdevs_list": [ 00:24:09.520 { 00:24:09.520 "name": null, 00:24:09.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.520 "is_configured": false, 00:24:09.520 "data_offset": 0, 00:24:09.520 "data_size": 65536 00:24:09.520 }, 00:24:09.520 { 00:24:09.520 "name": "BaseBdev2", 00:24:09.520 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:09.520 "is_configured": true, 00:24:09.520 "data_offset": 0, 00:24:09.520 "data_size": 65536 00:24:09.520 }, 00:24:09.520 { 00:24:09.520 "name": "BaseBdev3", 00:24:09.520 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:09.520 "is_configured": true, 00:24:09.520 "data_offset": 0, 00:24:09.520 "data_size": 65536 00:24:09.520 }, 00:24:09.520 { 00:24:09.520 "name": "BaseBdev4", 00:24:09.520 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:09.520 "is_configured": true, 00:24:09.520 "data_offset": 0, 00:24:09.520 "data_size": 65536 00:24:09.520 } 00:24:09.520 ] 00:24:09.520 }' 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:09.520 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.779 14:56:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:09.779 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:09.779 14:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.779 14:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.779 [2024-11-04 14:56:39.450910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:09.779 [2024-11-04 14:56:39.465384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:24:09.779 14:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.779 14:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:09.779 [2024-11-04 14:56:39.474911] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 14:56:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.713 "name": "raid_bdev1", 00:24:10.713 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:10.713 "strip_size_kb": 64, 00:24:10.713 "state": "online", 00:24:10.713 "raid_level": "raid5f", 00:24:10.713 "superblock": false, 00:24:10.713 "num_base_bdevs": 4, 00:24:10.713 "num_base_bdevs_discovered": 4, 00:24:10.713 "num_base_bdevs_operational": 4, 00:24:10.713 "process": { 00:24:10.713 "type": "rebuild", 00:24:10.713 "target": "spare", 00:24:10.713 "progress": { 00:24:10.713 "blocks": 17280, 00:24:10.714 "percent": 8 00:24:10.714 } 00:24:10.714 }, 00:24:10.714 "base_bdevs_list": [ 00:24:10.714 { 00:24:10.714 "name": "spare", 00:24:10.714 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:10.714 "is_configured": true, 00:24:10.714 "data_offset": 0, 00:24:10.714 "data_size": 65536 00:24:10.714 }, 00:24:10.714 { 00:24:10.714 "name": "BaseBdev2", 00:24:10.714 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:10.714 "is_configured": true, 00:24:10.714 "data_offset": 0, 00:24:10.714 "data_size": 65536 00:24:10.714 }, 00:24:10.714 { 00:24:10.714 "name": "BaseBdev3", 00:24:10.714 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:10.714 "is_configured": true, 00:24:10.714 "data_offset": 0, 00:24:10.714 "data_size": 65536 00:24:10.714 }, 00:24:10.714 { 00:24:10.714 "name": "BaseBdev4", 00:24:10.714 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:10.714 "is_configured": true, 00:24:10.714 "data_offset": 0, 00:24:10.714 "data_size": 65536 00:24:10.714 } 00:24:10.714 ] 00:24:10.714 }' 00:24:10.714 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.714 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.714 14:56:40 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=682 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.972 "name": "raid_bdev1", 00:24:10.972 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 
00:24:10.972 "strip_size_kb": 64, 00:24:10.972 "state": "online", 00:24:10.972 "raid_level": "raid5f", 00:24:10.972 "superblock": false, 00:24:10.972 "num_base_bdevs": 4, 00:24:10.972 "num_base_bdevs_discovered": 4, 00:24:10.972 "num_base_bdevs_operational": 4, 00:24:10.972 "process": { 00:24:10.972 "type": "rebuild", 00:24:10.972 "target": "spare", 00:24:10.972 "progress": { 00:24:10.972 "blocks": 21120, 00:24:10.972 "percent": 10 00:24:10.972 } 00:24:10.972 }, 00:24:10.972 "base_bdevs_list": [ 00:24:10.972 { 00:24:10.972 "name": "spare", 00:24:10.972 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:10.972 "is_configured": true, 00:24:10.972 "data_offset": 0, 00:24:10.972 "data_size": 65536 00:24:10.972 }, 00:24:10.972 { 00:24:10.972 "name": "BaseBdev2", 00:24:10.972 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:10.972 "is_configured": true, 00:24:10.972 "data_offset": 0, 00:24:10.972 "data_size": 65536 00:24:10.972 }, 00:24:10.972 { 00:24:10.972 "name": "BaseBdev3", 00:24:10.972 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:10.972 "is_configured": true, 00:24:10.972 "data_offset": 0, 00:24:10.972 "data_size": 65536 00:24:10.972 }, 00:24:10.972 { 00:24:10.972 "name": "BaseBdev4", 00:24:10.972 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:10.972 "is_configured": true, 00:24:10.972 "data_offset": 0, 00:24:10.972 "data_size": 65536 00:24:10.972 } 00:24:10.972 ] 00:24:10.972 }' 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.972 14:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:12.347 14:56:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:12.347 "name": "raid_bdev1", 00:24:12.347 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:12.347 "strip_size_kb": 64, 00:24:12.347 "state": "online", 00:24:12.347 "raid_level": "raid5f", 00:24:12.347 "superblock": false, 00:24:12.347 "num_base_bdevs": 4, 00:24:12.347 "num_base_bdevs_discovered": 4, 00:24:12.347 "num_base_bdevs_operational": 4, 00:24:12.347 "process": { 00:24:12.347 "type": "rebuild", 00:24:12.347 "target": "spare", 00:24:12.347 "progress": { 00:24:12.347 "blocks": 44160, 00:24:12.347 "percent": 22 00:24:12.347 } 00:24:12.347 }, 00:24:12.347 "base_bdevs_list": [ 00:24:12.347 { 00:24:12.347 "name": "spare", 00:24:12.347 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 
00:24:12.347 "is_configured": true, 00:24:12.347 "data_offset": 0, 00:24:12.347 "data_size": 65536 00:24:12.347 }, 00:24:12.347 { 00:24:12.347 "name": "BaseBdev2", 00:24:12.347 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:12.347 "is_configured": true, 00:24:12.347 "data_offset": 0, 00:24:12.347 "data_size": 65536 00:24:12.347 }, 00:24:12.347 { 00:24:12.347 "name": "BaseBdev3", 00:24:12.347 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:12.347 "is_configured": true, 00:24:12.347 "data_offset": 0, 00:24:12.347 "data_size": 65536 00:24:12.347 }, 00:24:12.347 { 00:24:12.347 "name": "BaseBdev4", 00:24:12.347 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:12.347 "is_configured": true, 00:24:12.347 "data_offset": 0, 00:24:12.347 "data_size": 65536 00:24:12.347 } 00:24:12.347 ] 00:24:12.347 }' 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.347 14:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:12.347 14:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:12.347 14:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:13.301 "name": "raid_bdev1", 00:24:13.301 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:13.301 "strip_size_kb": 64, 00:24:13.301 "state": "online", 00:24:13.301 "raid_level": "raid5f", 00:24:13.301 "superblock": false, 00:24:13.301 "num_base_bdevs": 4, 00:24:13.301 "num_base_bdevs_discovered": 4, 00:24:13.301 "num_base_bdevs_operational": 4, 00:24:13.301 "process": { 00:24:13.301 "type": "rebuild", 00:24:13.301 "target": "spare", 00:24:13.301 "progress": { 00:24:13.301 "blocks": 67200, 00:24:13.301 "percent": 34 00:24:13.301 } 00:24:13.301 }, 00:24:13.301 "base_bdevs_list": [ 00:24:13.301 { 00:24:13.301 "name": "spare", 00:24:13.301 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:13.301 "is_configured": true, 00:24:13.301 "data_offset": 0, 00:24:13.301 "data_size": 65536 00:24:13.301 }, 00:24:13.301 { 00:24:13.301 "name": "BaseBdev2", 00:24:13.301 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:13.301 "is_configured": true, 00:24:13.301 "data_offset": 0, 00:24:13.301 "data_size": 65536 00:24:13.301 }, 00:24:13.301 { 00:24:13.301 "name": "BaseBdev3", 00:24:13.301 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:13.301 "is_configured": true, 00:24:13.301 "data_offset": 0, 00:24:13.301 "data_size": 65536 00:24:13.301 }, 00:24:13.301 { 00:24:13.301 "name": 
"BaseBdev4", 00:24:13.301 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:13.301 "is_configured": true, 00:24:13.301 "data_offset": 0, 00:24:13.301 "data_size": 65536 00:24:13.301 } 00:24:13.301 ] 00:24:13.301 }' 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.301 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:13.575 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.575 14:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.509 14:56:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:14.509 "name": "raid_bdev1", 00:24:14.509 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:14.509 "strip_size_kb": 64, 00:24:14.509 "state": "online", 00:24:14.509 "raid_level": "raid5f", 00:24:14.509 "superblock": false, 00:24:14.509 "num_base_bdevs": 4, 00:24:14.509 "num_base_bdevs_discovered": 4, 00:24:14.509 "num_base_bdevs_operational": 4, 00:24:14.509 "process": { 00:24:14.509 "type": "rebuild", 00:24:14.509 "target": "spare", 00:24:14.509 "progress": { 00:24:14.509 "blocks": 88320, 00:24:14.509 "percent": 44 00:24:14.509 } 00:24:14.509 }, 00:24:14.509 "base_bdevs_list": [ 00:24:14.509 { 00:24:14.509 "name": "spare", 00:24:14.509 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:14.509 "is_configured": true, 00:24:14.509 "data_offset": 0, 00:24:14.509 "data_size": 65536 00:24:14.509 }, 00:24:14.509 { 00:24:14.509 "name": "BaseBdev2", 00:24:14.509 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:14.509 "is_configured": true, 00:24:14.509 "data_offset": 0, 00:24:14.509 "data_size": 65536 00:24:14.509 }, 00:24:14.509 { 00:24:14.509 "name": "BaseBdev3", 00:24:14.509 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:14.509 "is_configured": true, 00:24:14.509 "data_offset": 0, 00:24:14.509 "data_size": 65536 00:24:14.509 }, 00:24:14.509 { 00:24:14.509 "name": "BaseBdev4", 00:24:14.509 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:14.509 "is_configured": true, 00:24:14.509 "data_offset": 0, 00:24:14.509 "data_size": 65536 00:24:14.509 } 00:24:14.509 ] 00:24:14.509 }' 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.509 14:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:15.885 "name": "raid_bdev1", 00:24:15.885 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:15.885 "strip_size_kb": 64, 00:24:15.885 "state": "online", 00:24:15.885 "raid_level": "raid5f", 00:24:15.885 "superblock": false, 00:24:15.885 "num_base_bdevs": 4, 00:24:15.885 "num_base_bdevs_discovered": 4, 00:24:15.885 "num_base_bdevs_operational": 4, 00:24:15.885 "process": { 00:24:15.885 "type": "rebuild", 00:24:15.885 "target": "spare", 00:24:15.885 "progress": { 00:24:15.885 "blocks": 111360, 00:24:15.885 "percent": 56 00:24:15.885 } 
00:24:15.885 }, 00:24:15.885 "base_bdevs_list": [ 00:24:15.885 { 00:24:15.885 "name": "spare", 00:24:15.885 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:15.885 "is_configured": true, 00:24:15.885 "data_offset": 0, 00:24:15.885 "data_size": 65536 00:24:15.885 }, 00:24:15.885 { 00:24:15.885 "name": "BaseBdev2", 00:24:15.885 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:15.885 "is_configured": true, 00:24:15.885 "data_offset": 0, 00:24:15.885 "data_size": 65536 00:24:15.885 }, 00:24:15.885 { 00:24:15.885 "name": "BaseBdev3", 00:24:15.885 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:15.885 "is_configured": true, 00:24:15.885 "data_offset": 0, 00:24:15.885 "data_size": 65536 00:24:15.885 }, 00:24:15.885 { 00:24:15.885 "name": "BaseBdev4", 00:24:15.885 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:15.885 "is_configured": true, 00:24:15.885 "data_offset": 0, 00:24:15.885 "data_size": 65536 00:24:15.885 } 00:24:15.885 ] 00:24:15.885 }' 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.885 14:56:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:16.820 
14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.820 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.820 "name": "raid_bdev1", 00:24:16.820 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:16.820 "strip_size_kb": 64, 00:24:16.820 "state": "online", 00:24:16.820 "raid_level": "raid5f", 00:24:16.820 "superblock": false, 00:24:16.820 "num_base_bdevs": 4, 00:24:16.820 "num_base_bdevs_discovered": 4, 00:24:16.820 "num_base_bdevs_operational": 4, 00:24:16.820 "process": { 00:24:16.820 "type": "rebuild", 00:24:16.820 "target": "spare", 00:24:16.820 "progress": { 00:24:16.820 "blocks": 132480, 00:24:16.820 "percent": 67 00:24:16.820 } 00:24:16.820 }, 00:24:16.820 "base_bdevs_list": [ 00:24:16.820 { 00:24:16.820 "name": "spare", 00:24:16.820 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:16.820 "is_configured": true, 00:24:16.820 "data_offset": 0, 00:24:16.820 "data_size": 65536 00:24:16.820 }, 00:24:16.820 { 00:24:16.820 "name": "BaseBdev2", 00:24:16.820 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:16.820 "is_configured": true, 00:24:16.820 "data_offset": 0, 00:24:16.820 "data_size": 65536 00:24:16.820 }, 00:24:16.820 { 00:24:16.820 "name": "BaseBdev3", 00:24:16.820 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 
00:24:16.820 "is_configured": true, 00:24:16.820 "data_offset": 0, 00:24:16.820 "data_size": 65536 00:24:16.820 }, 00:24:16.820 { 00:24:16.820 "name": "BaseBdev4", 00:24:16.820 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:16.820 "is_configured": true, 00:24:16.820 "data_offset": 0, 00:24:16.820 "data_size": 65536 00:24:16.820 } 00:24:16.820 ] 00:24:16.821 }' 00:24:16.821 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.821 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.821 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:16.821 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.821 14:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.196 "name": "raid_bdev1", 00:24:18.196 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:18.196 "strip_size_kb": 64, 00:24:18.196 "state": "online", 00:24:18.196 "raid_level": "raid5f", 00:24:18.196 "superblock": false, 00:24:18.196 "num_base_bdevs": 4, 00:24:18.196 "num_base_bdevs_discovered": 4, 00:24:18.196 "num_base_bdevs_operational": 4, 00:24:18.196 "process": { 00:24:18.196 "type": "rebuild", 00:24:18.196 "target": "spare", 00:24:18.196 "progress": { 00:24:18.196 "blocks": 155520, 00:24:18.196 "percent": 79 00:24:18.196 } 00:24:18.196 }, 00:24:18.196 "base_bdevs_list": [ 00:24:18.196 { 00:24:18.196 "name": "spare", 00:24:18.196 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:18.196 "is_configured": true, 00:24:18.196 "data_offset": 0, 00:24:18.196 "data_size": 65536 00:24:18.196 }, 00:24:18.196 { 00:24:18.196 "name": "BaseBdev2", 00:24:18.196 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:18.196 "is_configured": true, 00:24:18.196 "data_offset": 0, 00:24:18.196 "data_size": 65536 00:24:18.196 }, 00:24:18.196 { 00:24:18.196 "name": "BaseBdev3", 00:24:18.196 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:18.196 "is_configured": true, 00:24:18.196 "data_offset": 0, 00:24:18.196 "data_size": 65536 00:24:18.196 }, 00:24:18.196 { 00:24:18.196 "name": "BaseBdev4", 00:24:18.196 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:18.196 "is_configured": true, 00:24:18.196 "data_offset": 0, 00:24:18.196 "data_size": 65536 00:24:18.196 } 00:24:18.196 ] 00:24:18.196 }' 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:18.196 14:56:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:19.141 "name": "raid_bdev1", 00:24:19.141 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:19.141 "strip_size_kb": 64, 00:24:19.141 "state": "online", 00:24:19.141 "raid_level": "raid5f", 00:24:19.141 "superblock": false, 00:24:19.141 "num_base_bdevs": 4, 00:24:19.141 "num_base_bdevs_discovered": 4, 00:24:19.141 "num_base_bdevs_operational": 4, 00:24:19.141 
"process": { 00:24:19.141 "type": "rebuild", 00:24:19.141 "target": "spare", 00:24:19.141 "progress": { 00:24:19.141 "blocks": 178560, 00:24:19.141 "percent": 90 00:24:19.141 } 00:24:19.141 }, 00:24:19.141 "base_bdevs_list": [ 00:24:19.141 { 00:24:19.141 "name": "spare", 00:24:19.141 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:19.141 "is_configured": true, 00:24:19.141 "data_offset": 0, 00:24:19.141 "data_size": 65536 00:24:19.141 }, 00:24:19.141 { 00:24:19.141 "name": "BaseBdev2", 00:24:19.141 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:19.141 "is_configured": true, 00:24:19.141 "data_offset": 0, 00:24:19.141 "data_size": 65536 00:24:19.141 }, 00:24:19.141 { 00:24:19.141 "name": "BaseBdev3", 00:24:19.141 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:19.141 "is_configured": true, 00:24:19.141 "data_offset": 0, 00:24:19.141 "data_size": 65536 00:24:19.141 }, 00:24:19.141 { 00:24:19.141 "name": "BaseBdev4", 00:24:19.141 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:19.141 "is_configured": true, 00:24:19.141 "data_offset": 0, 00:24:19.141 "data_size": 65536 00:24:19.141 } 00:24:19.141 ] 00:24:19.141 }' 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:19.141 14:56:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:19.412 14:56:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.412 14:56:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:20.346 [2024-11-04 14:56:49.898441] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:20.347 [2024-11-04 14:56:49.898595] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:20.347 [2024-11-04 
14:56:49.898674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.347 "name": "raid_bdev1", 00:24:20.347 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:20.347 "strip_size_kb": 64, 00:24:20.347 "state": "online", 00:24:20.347 "raid_level": "raid5f", 00:24:20.347 "superblock": false, 00:24:20.347 "num_base_bdevs": 4, 00:24:20.347 "num_base_bdevs_discovered": 4, 00:24:20.347 "num_base_bdevs_operational": 4, 00:24:20.347 "base_bdevs_list": [ 00:24:20.347 { 00:24:20.347 "name": "spare", 00:24:20.347 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:20.347 "is_configured": true, 00:24:20.347 "data_offset": 0, 00:24:20.347 "data_size": 65536 
00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "name": "BaseBdev2", 00:24:20.347 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:20.347 "is_configured": true, 00:24:20.347 "data_offset": 0, 00:24:20.347 "data_size": 65536 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "name": "BaseBdev3", 00:24:20.347 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:20.347 "is_configured": true, 00:24:20.347 "data_offset": 0, 00:24:20.347 "data_size": 65536 00:24:20.347 }, 00:24:20.347 { 00:24:20.347 "name": "BaseBdev4", 00:24:20.347 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:20.347 "is_configured": true, 00:24:20.347 "data_offset": 0, 00:24:20.347 "data_size": 65536 00:24:20.347 } 00:24:20.347 ] 00:24:20.347 }' 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.347 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.605 "name": "raid_bdev1", 00:24:20.605 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:20.605 "strip_size_kb": 64, 00:24:20.605 "state": "online", 00:24:20.605 "raid_level": "raid5f", 00:24:20.605 "superblock": false, 00:24:20.605 "num_base_bdevs": 4, 00:24:20.605 "num_base_bdevs_discovered": 4, 00:24:20.605 "num_base_bdevs_operational": 4, 00:24:20.605 "base_bdevs_list": [ 00:24:20.605 { 00:24:20.605 "name": "spare", 00:24:20.605 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:20.605 "is_configured": true, 00:24:20.605 "data_offset": 0, 00:24:20.605 "data_size": 65536 00:24:20.605 }, 00:24:20.605 { 00:24:20.605 "name": "BaseBdev2", 00:24:20.605 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:20.605 "is_configured": true, 00:24:20.605 "data_offset": 0, 00:24:20.605 "data_size": 65536 00:24:20.605 }, 00:24:20.605 { 00:24:20.605 "name": "BaseBdev3", 00:24:20.605 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:20.605 "is_configured": true, 00:24:20.605 "data_offset": 0, 00:24:20.605 "data_size": 65536 00:24:20.605 }, 00:24:20.605 { 00:24:20.605 "name": "BaseBdev4", 00:24:20.605 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:20.605 "is_configured": true, 00:24:20.605 "data_offset": 0, 00:24:20.605 "data_size": 65536 00:24:20.605 } 00:24:20.605 ] 00:24:20.605 }' 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.605 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.605 "name": 
"raid_bdev1", 00:24:20.606 "uuid": "982ef55a-5f45-40c2-bb81-ff2763ffc201", 00:24:20.606 "strip_size_kb": 64, 00:24:20.606 "state": "online", 00:24:20.606 "raid_level": "raid5f", 00:24:20.606 "superblock": false, 00:24:20.606 "num_base_bdevs": 4, 00:24:20.606 "num_base_bdevs_discovered": 4, 00:24:20.606 "num_base_bdevs_operational": 4, 00:24:20.606 "base_bdevs_list": [ 00:24:20.606 { 00:24:20.606 "name": "spare", 00:24:20.606 "uuid": "779e6f68-72ca-5168-82a8-3412359cd99c", 00:24:20.606 "is_configured": true, 00:24:20.606 "data_offset": 0, 00:24:20.606 "data_size": 65536 00:24:20.606 }, 00:24:20.606 { 00:24:20.606 "name": "BaseBdev2", 00:24:20.606 "uuid": "5dc6b1a0-375e-540f-8cb7-0efa7ecf56cc", 00:24:20.606 "is_configured": true, 00:24:20.606 "data_offset": 0, 00:24:20.606 "data_size": 65536 00:24:20.606 }, 00:24:20.606 { 00:24:20.606 "name": "BaseBdev3", 00:24:20.606 "uuid": "8bc849cc-941f-5236-8266-1bee0d43c846", 00:24:20.606 "is_configured": true, 00:24:20.606 "data_offset": 0, 00:24:20.606 "data_size": 65536 00:24:20.606 }, 00:24:20.606 { 00:24:20.606 "name": "BaseBdev4", 00:24:20.606 "uuid": "619e5aa8-ab28-5ef1-b8d5-696587d79c6c", 00:24:20.606 "is_configured": true, 00:24:20.606 "data_offset": 0, 00:24:20.606 "data_size": 65536 00:24:20.606 } 00:24:20.606 ] 00:24:20.606 }' 00:24:20.606 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.606 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.172 [2024-11-04 14:56:50.934941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:21.172 [2024-11-04 14:56:50.935011] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:21.172 [2024-11-04 14:56:50.935148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:21.172 [2024-11-04 14:56:50.935366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:21.172 [2024-11-04 14:56:50.935400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.172 14:56:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:21.430 /dev/nbd0 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:21.688 1+0 records in 00:24:21.688 1+0 records out 00:24:21.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338283 s, 12.1 MB/s 00:24:21.688 14:56:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.688 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:21.947 /dev/nbd1 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 
20 )) 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:21.947 1+0 records in 00:24:21.947 1+0 records out 00:24:21.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407989 s, 10.0 MB/s 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.947 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:22.205 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:22.205 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:22.205 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:22.205 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:22.205 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:22.205 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:22.205 14:56:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:22.464 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:22.723 14:56:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85168 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 85168 ']' 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 85168 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85168 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:22.723 killing process with pid 85168 00:24:22.723 Received shutdown signal, test time was about 60.000000 seconds 00:24:22.723 00:24:22.723 Latency(us) 00:24:22.723 [2024-11-04T14:56:52.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.723 [2024-11-04T14:56:52.615Z] =================================================================================================================== 00:24:22.723 [2024-11-04T14:56:52.615Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85168' 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 85168 00:24:22.723 14:56:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 85168 00:24:22.723 [2024-11-04 14:56:52.580098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:23.289 [2024-11-04 14:56:53.059472] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:24:24.272 00:24:24.272 real 0m20.470s 00:24:24.272 user 0m25.448s 00:24:24.272 sys 0m2.495s 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.272 ************************************ 00:24:24.272 END TEST raid5f_rebuild_test 00:24:24.272 ************************************ 00:24:24.272 14:56:54 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:24:24.272 14:56:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:24.272 14:56:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:24.272 14:56:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:24.272 ************************************ 00:24:24.272 START TEST raid5f_rebuild_test_sb 00:24:24.272 ************************************ 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:24.272 14:56:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:24.272 
14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85673 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85673 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85673 ']' 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:24.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:24.272 14:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.531 [2024-11-04 14:56:54.236211] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:24:24.531 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:24.531 Zero copy mechanism will not be used. 00:24:24.531 [2024-11-04 14:56:54.236431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85673 ] 00:24:24.790 [2024-11-04 14:56:54.427563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.790 [2024-11-04 14:56:54.567901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.048 [2024-11-04 14:56:54.772632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.048 [2024-11-04 14:56:54.772728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 BaseBdev1_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.615 [2024-11-04 14:56:55.266135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:25.615 [2024-11-04 14:56:55.266279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.615 [2024-11-04 14:56:55.266330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:25.615 [2024-11-04 14:56:55.266350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.615 [2024-11-04 14:56:55.269358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.615 [2024-11-04 14:56:55.269420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:25.615 BaseBdev1 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 BaseBdev2_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 [2024-11-04 14:56:55.323061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:25.615 
[2024-11-04 14:56:55.323159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.615 [2024-11-04 14:56:55.323189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:25.615 [2024-11-04 14:56:55.323209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.615 [2024-11-04 14:56:55.326110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.615 [2024-11-04 14:56:55.326167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:25.615 BaseBdev2 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 BaseBdev3_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 [2024-11-04 14:56:55.389288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:25.615 [2024-11-04 14:56:55.389383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.615 [2024-11-04 14:56:55.389417] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:25.615 [2024-11-04 14:56:55.389436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.615 [2024-11-04 14:56:55.392312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.615 [2024-11-04 14:56:55.392389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:25.615 BaseBdev3 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 BaseBdev4_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 [2024-11-04 14:56:55.443336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:25.615 [2024-11-04 14:56:55.443437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.615 [2024-11-04 14:56:55.443471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:25.615 [2024-11-04 14:56:55.443491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:24:25.615 [2024-11-04 14:56:55.446676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.615 [2024-11-04 14:56:55.446755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:25.615 BaseBdev4 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 spare_malloc 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.615 spare_delay 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.615 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.874 [2024-11-04 14:56:55.510016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:25.874 [2024-11-04 14:56:55.510100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.874 [2024-11-04 14:56:55.510128] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:25.874 [2024-11-04 14:56:55.510146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.874 [2024-11-04 14:56:55.513139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.874 [2024-11-04 14:56:55.513214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:25.874 spare 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.874 [2024-11-04 14:56:55.522164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.874 [2024-11-04 14:56:55.524756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:25.874 [2024-11-04 14:56:55.524861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:25.874 [2024-11-04 14:56:55.524937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:25.874 [2024-11-04 14:56:55.525261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:25.874 [2024-11-04 14:56:55.525296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:25.874 [2024-11-04 14:56:55.525651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:25.874 [2024-11-04 14:56:55.532910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:25.874 
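The `blockcnt 190464, blocklen 512` logged at raid creation is consistent with the other numbers in this run: each base bdev is a 32 MiB malloc bdev (`bdev_malloc_create 32 512`, i.e. 65536 blocks of 512 bytes), the superblock reserves a 2048-block `data_offset` (leaving `data_size` 63488, as the JSON below reports), and raid5f spends one device's worth of capacity on parity. A quick arithmetic check:

```shell
#!/usr/bin/env bash
# Verify the raid bdev block count reported in the log:
# 4 base bdevs of 65536 blocks, 2048 reserved for the superblock,
# raid5f keeps (n-1) data devices' worth of blocks.
blocklen=512
base_blocks=$((32 * 1024 * 1024 / blocklen))      # 65536 per malloc bdev
data_offset=2048                                   # superblock reservation
data_size=$((base_blocks - data_offset))           # 63488, matches JSON
num_base_bdevs=4
raid_blocks=$((data_size * (num_base_bdevs - 1)))
echo "$raid_blocks"
```

This reproduces both the `raid_bdev_size=190464` the test later reads back via `jq -r '.[].num_blocks'` and the per-bdev `data_size: 63488` in the dumped state.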
[2024-11-04 14:56:55.532939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:25.874 [2024-11-04 14:56:55.533263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.874 "name": "raid_bdev1", 00:24:25.874 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:25.874 "strip_size_kb": 64, 00:24:25.874 "state": "online", 00:24:25.874 "raid_level": "raid5f", 00:24:25.874 "superblock": true, 00:24:25.874 "num_base_bdevs": 4, 00:24:25.874 "num_base_bdevs_discovered": 4, 00:24:25.874 "num_base_bdevs_operational": 4, 00:24:25.874 "base_bdevs_list": [ 00:24:25.874 { 00:24:25.874 "name": "BaseBdev1", 00:24:25.874 "uuid": "213231d8-05bb-5b48-a8f0-96abf3f10e69", 00:24:25.874 "is_configured": true, 00:24:25.874 "data_offset": 2048, 00:24:25.874 "data_size": 63488 00:24:25.874 }, 00:24:25.874 { 00:24:25.874 "name": "BaseBdev2", 00:24:25.874 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:25.874 "is_configured": true, 00:24:25.874 "data_offset": 2048, 00:24:25.874 "data_size": 63488 00:24:25.874 }, 00:24:25.874 { 00:24:25.874 "name": "BaseBdev3", 00:24:25.874 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:25.874 "is_configured": true, 00:24:25.874 "data_offset": 2048, 00:24:25.874 "data_size": 63488 00:24:25.874 }, 00:24:25.874 { 00:24:25.874 "name": "BaseBdev4", 00:24:25.874 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:25.874 "is_configured": true, 00:24:25.874 "data_offset": 2048, 00:24:25.874 "data_size": 63488 00:24:25.874 } 00:24:25.874 ] 00:24:25.874 }' 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.874 14:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 [2024-11-04 14:56:56.066056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:26.442 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:26.701 [2024-11-04 14:56:56.449884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:26.701 /dev/nbd0 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:24:26.701 1+0 records in 00:24:26.701 1+0 records out 00:24:26.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216016 s, 19.0 MB/s 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:24:26.701 14:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:24:27.268 496+0 records in 00:24:27.268 496+0 records out 00:24:27.268 97517568 bytes (98 MB, 93 MiB) copied, 0.599002 s, 163 MB/s 00:24:27.268 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:27.268 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:27.268 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:27.268 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:24:27.268 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:27.268 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:27.268 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:27.526 [2024-11-04 14:56:57.359303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.526 [2024-11-04 14:56:57.395931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.526 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.527 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.527 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.527 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.527 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.784 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.784 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.784 "name": "raid_bdev1", 00:24:27.784 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:27.784 "strip_size_kb": 64, 00:24:27.784 "state": "online", 00:24:27.784 "raid_level": "raid5f", 00:24:27.784 "superblock": true, 00:24:27.784 "num_base_bdevs": 4, 00:24:27.784 "num_base_bdevs_discovered": 3, 00:24:27.784 
"num_base_bdevs_operational": 3, 00:24:27.784 "base_bdevs_list": [ 00:24:27.784 { 00:24:27.784 "name": null, 00:24:27.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.784 "is_configured": false, 00:24:27.784 "data_offset": 0, 00:24:27.784 "data_size": 63488 00:24:27.784 }, 00:24:27.784 { 00:24:27.784 "name": "BaseBdev2", 00:24:27.784 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:27.784 "is_configured": true, 00:24:27.784 "data_offset": 2048, 00:24:27.784 "data_size": 63488 00:24:27.784 }, 00:24:27.784 { 00:24:27.784 "name": "BaseBdev3", 00:24:27.784 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:27.784 "is_configured": true, 00:24:27.784 "data_offset": 2048, 00:24:27.784 "data_size": 63488 00:24:27.784 }, 00:24:27.784 { 00:24:27.784 "name": "BaseBdev4", 00:24:27.784 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:27.784 "is_configured": true, 00:24:27.784 "data_offset": 2048, 00:24:27.784 "data_size": 63488 00:24:27.785 } 00:24:27.785 ] 00:24:27.785 }' 00:24:27.785 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.785 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.044 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:28.044 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.044 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.044 [2024-11-04 14:56:57.912168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:28.044 [2024-11-04 14:56:57.927065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:24:28.044 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.044 14:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:28.302 
[2024-11-04 14:56:57.937176] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:29.236 "name": "raid_bdev1", 00:24:29.236 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:29.236 "strip_size_kb": 64, 00:24:29.236 "state": "online", 00:24:29.236 "raid_level": "raid5f", 00:24:29.236 "superblock": true, 00:24:29.236 "num_base_bdevs": 4, 00:24:29.236 "num_base_bdevs_discovered": 4, 00:24:29.236 "num_base_bdevs_operational": 4, 00:24:29.236 "process": { 00:24:29.236 "type": "rebuild", 00:24:29.236 "target": "spare", 00:24:29.236 "progress": { 00:24:29.236 "blocks": 17280, 00:24:29.236 "percent": 9 00:24:29.236 } 00:24:29.236 }, 00:24:29.236 "base_bdevs_list": [ 00:24:29.236 { 00:24:29.236 "name": 
"spare", 00:24:29.236 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:29.236 "is_configured": true, 00:24:29.236 "data_offset": 2048, 00:24:29.236 "data_size": 63488 00:24:29.236 }, 00:24:29.236 { 00:24:29.236 "name": "BaseBdev2", 00:24:29.236 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:29.236 "is_configured": true, 00:24:29.236 "data_offset": 2048, 00:24:29.236 "data_size": 63488 00:24:29.236 }, 00:24:29.236 { 00:24:29.236 "name": "BaseBdev3", 00:24:29.236 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:29.236 "is_configured": true, 00:24:29.236 "data_offset": 2048, 00:24:29.236 "data_size": 63488 00:24:29.236 }, 00:24:29.236 { 00:24:29.236 "name": "BaseBdev4", 00:24:29.236 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:29.236 "is_configured": true, 00:24:29.236 "data_offset": 2048, 00:24:29.236 "data_size": 63488 00:24:29.236 } 00:24:29.236 ] 00:24:29.236 }' 00:24:29.236 14:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:29.236 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:29.236 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:29.236 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.236 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:29.236 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.236 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.236 [2024-11-04 14:56:59.099639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:29.495 [2024-11-04 14:56:59.151590] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:29.495 [2024-11-04 
14:56:59.151667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.495 [2024-11-04 14:56:59.151692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:29.495 [2024-11-04 14:56:59.151711] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.495 "name": "raid_bdev1", 00:24:29.495 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:29.495 "strip_size_kb": 64, 00:24:29.495 "state": "online", 00:24:29.495 "raid_level": "raid5f", 00:24:29.495 "superblock": true, 00:24:29.495 "num_base_bdevs": 4, 00:24:29.495 "num_base_bdevs_discovered": 3, 00:24:29.495 "num_base_bdevs_operational": 3, 00:24:29.495 "base_bdevs_list": [ 00:24:29.495 { 00:24:29.495 "name": null, 00:24:29.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.495 "is_configured": false, 00:24:29.495 "data_offset": 0, 00:24:29.495 "data_size": 63488 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "name": "BaseBdev2", 00:24:29.495 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:29.495 "is_configured": true, 00:24:29.495 "data_offset": 2048, 00:24:29.495 "data_size": 63488 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "name": "BaseBdev3", 00:24:29.495 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:29.495 "is_configured": true, 00:24:29.495 "data_offset": 2048, 00:24:29.495 "data_size": 63488 00:24:29.495 }, 00:24:29.495 { 00:24:29.495 "name": "BaseBdev4", 00:24:29.495 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:29.495 "is_configured": true, 00:24:29.495 "data_offset": 2048, 00:24:29.495 "data_size": 63488 00:24:29.495 } 00:24:29.495 ] 00:24:29.495 }' 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:29.495 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:30.061 "name": "raid_bdev1", 00:24:30.061 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:30.061 "strip_size_kb": 64, 00:24:30.061 "state": "online", 00:24:30.061 "raid_level": "raid5f", 00:24:30.061 "superblock": true, 00:24:30.061 "num_base_bdevs": 4, 00:24:30.061 "num_base_bdevs_discovered": 3, 00:24:30.061 "num_base_bdevs_operational": 3, 00:24:30.061 "base_bdevs_list": [ 00:24:30.061 { 00:24:30.061 "name": null, 00:24:30.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.061 "is_configured": false, 00:24:30.061 "data_offset": 0, 00:24:30.061 "data_size": 63488 00:24:30.061 }, 00:24:30.061 { 00:24:30.061 "name": "BaseBdev2", 00:24:30.061 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:30.061 "is_configured": true, 00:24:30.061 "data_offset": 2048, 00:24:30.061 "data_size": 63488 00:24:30.061 }, 00:24:30.061 { 00:24:30.061 "name": "BaseBdev3", 00:24:30.061 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:30.061 "is_configured": true, 
00:24:30.061 "data_offset": 2048, 00:24:30.061 "data_size": 63488 00:24:30.061 }, 00:24:30.061 { 00:24:30.061 "name": "BaseBdev4", 00:24:30.061 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:30.061 "is_configured": true, 00:24:30.061 "data_offset": 2048, 00:24:30.061 "data_size": 63488 00:24:30.061 } 00:24:30.061 ] 00:24:30.061 }' 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:30.061 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:30.062 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:30.062 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.062 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.062 [2024-11-04 14:56:59.896853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:30.062 [2024-11-04 14:56:59.910478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:24:30.062 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.062 14:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:30.062 [2024-11-04 14:56:59.919545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:31.436 14:57:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.436 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:31.436 "name": "raid_bdev1", 00:24:31.436 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:31.436 "strip_size_kb": 64, 00:24:31.436 "state": "online", 00:24:31.436 "raid_level": "raid5f", 00:24:31.436 "superblock": true, 00:24:31.436 "num_base_bdevs": 4, 00:24:31.436 "num_base_bdevs_discovered": 4, 00:24:31.436 "num_base_bdevs_operational": 4, 00:24:31.436 "process": { 00:24:31.436 "type": "rebuild", 00:24:31.436 "target": "spare", 00:24:31.436 "progress": { 00:24:31.436 "blocks": 17280, 00:24:31.436 "percent": 9 00:24:31.436 } 00:24:31.436 }, 00:24:31.436 "base_bdevs_list": [ 00:24:31.436 { 00:24:31.436 "name": "spare", 00:24:31.436 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:31.436 "is_configured": true, 00:24:31.436 "data_offset": 2048, 00:24:31.436 "data_size": 63488 00:24:31.436 }, 00:24:31.436 { 00:24:31.436 "name": "BaseBdev2", 00:24:31.436 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:31.436 "is_configured": true, 00:24:31.436 "data_offset": 2048, 00:24:31.437 "data_size": 63488 
00:24:31.437 }, 00:24:31.437 { 00:24:31.437 "name": "BaseBdev3", 00:24:31.437 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:31.437 "is_configured": true, 00:24:31.437 "data_offset": 2048, 00:24:31.437 "data_size": 63488 00:24:31.437 }, 00:24:31.437 { 00:24:31.437 "name": "BaseBdev4", 00:24:31.437 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:31.437 "is_configured": true, 00:24:31.437 "data_offset": 2048, 00:24:31.437 "data_size": 63488 00:24:31.437 } 00:24:31.437 ] 00:24:31.437 }' 00:24:31.437 14:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:31.437 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=703 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:31.437 14:57:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:31.437 "name": "raid_bdev1", 00:24:31.437 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:31.437 "strip_size_kb": 64, 00:24:31.437 "state": "online", 00:24:31.437 "raid_level": "raid5f", 00:24:31.437 "superblock": true, 00:24:31.437 "num_base_bdevs": 4, 00:24:31.437 "num_base_bdevs_discovered": 4, 00:24:31.437 "num_base_bdevs_operational": 4, 00:24:31.437 "process": { 00:24:31.437 "type": "rebuild", 00:24:31.437 "target": "spare", 00:24:31.437 "progress": { 00:24:31.437 "blocks": 21120, 00:24:31.437 "percent": 11 00:24:31.437 } 00:24:31.437 }, 00:24:31.437 "base_bdevs_list": [ 00:24:31.437 { 00:24:31.437 "name": "spare", 00:24:31.437 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:31.437 "is_configured": true, 00:24:31.437 "data_offset": 2048, 00:24:31.437 "data_size": 63488 00:24:31.437 }, 00:24:31.437 { 00:24:31.437 "name": "BaseBdev2", 00:24:31.437 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:31.437 "is_configured": true, 00:24:31.437 "data_offset": 2048, 00:24:31.437 "data_size": 63488 
00:24:31.437 }, 00:24:31.437 { 00:24:31.437 "name": "BaseBdev3", 00:24:31.437 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:31.437 "is_configured": true, 00:24:31.437 "data_offset": 2048, 00:24:31.437 "data_size": 63488 00:24:31.437 }, 00:24:31.437 { 00:24:31.437 "name": "BaseBdev4", 00:24:31.437 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:31.437 "is_configured": true, 00:24:31.437 "data_offset": 2048, 00:24:31.437 "data_size": 63488 00:24:31.437 } 00:24:31.437 ] 00:24:31.437 }' 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:31.437 14:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.371 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.629 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:32.629 "name": "raid_bdev1", 00:24:32.629 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:32.629 "strip_size_kb": 64, 00:24:32.629 "state": "online", 00:24:32.629 "raid_level": "raid5f", 00:24:32.629 "superblock": true, 00:24:32.629 "num_base_bdevs": 4, 00:24:32.629 "num_base_bdevs_discovered": 4, 00:24:32.629 "num_base_bdevs_operational": 4, 00:24:32.629 "process": { 00:24:32.629 "type": "rebuild", 00:24:32.629 "target": "spare", 00:24:32.629 "progress": { 00:24:32.629 "blocks": 42240, 00:24:32.629 "percent": 22 00:24:32.629 } 00:24:32.629 }, 00:24:32.629 "base_bdevs_list": [ 00:24:32.629 { 00:24:32.629 "name": "spare", 00:24:32.629 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:32.629 "is_configured": true, 00:24:32.629 "data_offset": 2048, 00:24:32.629 "data_size": 63488 00:24:32.629 }, 00:24:32.629 { 00:24:32.629 "name": "BaseBdev2", 00:24:32.629 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:32.629 "is_configured": true, 00:24:32.629 "data_offset": 2048, 00:24:32.629 "data_size": 63488 00:24:32.629 }, 00:24:32.629 { 00:24:32.629 "name": "BaseBdev3", 00:24:32.629 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:32.629 "is_configured": true, 00:24:32.629 "data_offset": 2048, 00:24:32.629 "data_size": 63488 00:24:32.629 }, 00:24:32.629 { 00:24:32.629 "name": "BaseBdev4", 00:24:32.629 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:32.629 "is_configured": true, 00:24:32.629 "data_offset": 2048, 00:24:32.629 "data_size": 63488 00:24:32.629 } 00:24:32.629 ] 00:24:32.629 }' 00:24:32.629 14:57:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:32.629 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:32.629 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:32.629 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:32.629 14:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:33.560 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:33.560 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:33.560 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:33.561 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:33.561 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:33.561 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:33.561 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.561 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.561 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.561 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.561 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.838 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:33.838 "name": "raid_bdev1", 00:24:33.838 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:33.838 
"strip_size_kb": 64, 00:24:33.838 "state": "online", 00:24:33.838 "raid_level": "raid5f", 00:24:33.838 "superblock": true, 00:24:33.838 "num_base_bdevs": 4, 00:24:33.838 "num_base_bdevs_discovered": 4, 00:24:33.838 "num_base_bdevs_operational": 4, 00:24:33.838 "process": { 00:24:33.838 "type": "rebuild", 00:24:33.838 "target": "spare", 00:24:33.838 "progress": { 00:24:33.838 "blocks": 65280, 00:24:33.838 "percent": 34 00:24:33.838 } 00:24:33.838 }, 00:24:33.838 "base_bdevs_list": [ 00:24:33.838 { 00:24:33.838 "name": "spare", 00:24:33.838 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:33.838 "is_configured": true, 00:24:33.838 "data_offset": 2048, 00:24:33.838 "data_size": 63488 00:24:33.838 }, 00:24:33.838 { 00:24:33.838 "name": "BaseBdev2", 00:24:33.838 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:33.838 "is_configured": true, 00:24:33.838 "data_offset": 2048, 00:24:33.838 "data_size": 63488 00:24:33.838 }, 00:24:33.838 { 00:24:33.838 "name": "BaseBdev3", 00:24:33.838 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:33.838 "is_configured": true, 00:24:33.838 "data_offset": 2048, 00:24:33.838 "data_size": 63488 00:24:33.838 }, 00:24:33.838 { 00:24:33.838 "name": "BaseBdev4", 00:24:33.838 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:33.838 "is_configured": true, 00:24:33.838 "data_offset": 2048, 00:24:33.838 "data_size": 63488 00:24:33.838 } 00:24:33.838 ] 00:24:33.838 }' 00:24:33.838 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:33.838 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:33.838 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:33.838 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:33.838 14:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:34.795 
14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.795 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:34.795 "name": "raid_bdev1", 00:24:34.795 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:34.795 "strip_size_kb": 64, 00:24:34.795 "state": "online", 00:24:34.795 "raid_level": "raid5f", 00:24:34.795 "superblock": true, 00:24:34.795 "num_base_bdevs": 4, 00:24:34.795 "num_base_bdevs_discovered": 4, 00:24:34.795 "num_base_bdevs_operational": 4, 00:24:34.795 "process": { 00:24:34.795 "type": "rebuild", 00:24:34.795 "target": "spare", 00:24:34.795 "progress": { 00:24:34.795 "blocks": 88320, 00:24:34.795 "percent": 46 00:24:34.795 } 00:24:34.795 }, 00:24:34.795 "base_bdevs_list": [ 00:24:34.795 { 00:24:34.795 "name": "spare", 00:24:34.795 "uuid": 
"aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:34.795 "is_configured": true, 00:24:34.795 "data_offset": 2048, 00:24:34.795 "data_size": 63488 00:24:34.795 }, 00:24:34.795 { 00:24:34.795 "name": "BaseBdev2", 00:24:34.795 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:34.795 "is_configured": true, 00:24:34.795 "data_offset": 2048, 00:24:34.795 "data_size": 63488 00:24:34.796 }, 00:24:34.796 { 00:24:34.796 "name": "BaseBdev3", 00:24:34.796 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:34.796 "is_configured": true, 00:24:34.796 "data_offset": 2048, 00:24:34.796 "data_size": 63488 00:24:34.796 }, 00:24:34.796 { 00:24:34.796 "name": "BaseBdev4", 00:24:34.796 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:34.796 "is_configured": true, 00:24:34.796 "data_offset": 2048, 00:24:34.796 "data_size": 63488 00:24:34.796 } 00:24:34.796 ] 00:24:34.796 }' 00:24:34.796 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:34.796 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:34.796 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:35.054 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.054 14:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:35.989 "name": "raid_bdev1", 00:24:35.989 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:35.989 "strip_size_kb": 64, 00:24:35.989 "state": "online", 00:24:35.989 "raid_level": "raid5f", 00:24:35.989 "superblock": true, 00:24:35.989 "num_base_bdevs": 4, 00:24:35.989 "num_base_bdevs_discovered": 4, 00:24:35.989 "num_base_bdevs_operational": 4, 00:24:35.989 "process": { 00:24:35.989 "type": "rebuild", 00:24:35.989 "target": "spare", 00:24:35.989 "progress": { 00:24:35.989 "blocks": 109440, 00:24:35.989 "percent": 57 00:24:35.989 } 00:24:35.989 }, 00:24:35.989 "base_bdevs_list": [ 00:24:35.989 { 00:24:35.989 "name": "spare", 00:24:35.989 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:35.989 "is_configured": true, 00:24:35.989 "data_offset": 2048, 00:24:35.989 "data_size": 63488 00:24:35.989 }, 00:24:35.989 { 00:24:35.989 "name": "BaseBdev2", 00:24:35.989 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:35.989 "is_configured": true, 00:24:35.989 "data_offset": 2048, 00:24:35.989 "data_size": 63488 00:24:35.989 }, 00:24:35.989 { 00:24:35.989 "name": "BaseBdev3", 00:24:35.989 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:35.989 "is_configured": true, 00:24:35.989 
"data_offset": 2048, 00:24:35.989 "data_size": 63488 00:24:35.989 }, 00:24:35.989 { 00:24:35.989 "name": "BaseBdev4", 00:24:35.989 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:35.989 "is_configured": true, 00:24:35.989 "data_offset": 2048, 00:24:35.989 "data_size": 63488 00:24:35.989 } 00:24:35.989 ] 00:24:35.989 }' 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.989 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:36.247 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:36.247 14:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:37.183 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:37.183 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.183 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:37.183 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:37.183 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:37.183 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:37.183 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.184 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.184 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.184 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:24:37.184 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.184 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:37.184 "name": "raid_bdev1", 00:24:37.184 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:37.184 "strip_size_kb": 64, 00:24:37.184 "state": "online", 00:24:37.184 "raid_level": "raid5f", 00:24:37.184 "superblock": true, 00:24:37.184 "num_base_bdevs": 4, 00:24:37.184 "num_base_bdevs_discovered": 4, 00:24:37.184 "num_base_bdevs_operational": 4, 00:24:37.184 "process": { 00:24:37.184 "type": "rebuild", 00:24:37.184 "target": "spare", 00:24:37.184 "progress": { 00:24:37.184 "blocks": 132480, 00:24:37.184 "percent": 69 00:24:37.184 } 00:24:37.184 }, 00:24:37.184 "base_bdevs_list": [ 00:24:37.184 { 00:24:37.184 "name": "spare", 00:24:37.184 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:37.184 "is_configured": true, 00:24:37.184 "data_offset": 2048, 00:24:37.184 "data_size": 63488 00:24:37.184 }, 00:24:37.184 { 00:24:37.184 "name": "BaseBdev2", 00:24:37.184 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:37.184 "is_configured": true, 00:24:37.184 "data_offset": 2048, 00:24:37.184 "data_size": 63488 00:24:37.184 }, 00:24:37.184 { 00:24:37.184 "name": "BaseBdev3", 00:24:37.184 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:37.184 "is_configured": true, 00:24:37.184 "data_offset": 2048, 00:24:37.184 "data_size": 63488 00:24:37.184 }, 00:24:37.184 { 00:24:37.184 "name": "BaseBdev4", 00:24:37.184 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:37.184 "is_configured": true, 00:24:37.184 "data_offset": 2048, 00:24:37.184 "data_size": 63488 00:24:37.184 } 00:24:37.184 ] 00:24:37.184 }' 00:24:37.184 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:37.184 14:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:24:37.184 14:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:37.184 14:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.184 14:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.578 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:38.578 "name": "raid_bdev1", 00:24:38.578 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:38.578 "strip_size_kb": 64, 00:24:38.578 "state": "online", 00:24:38.578 "raid_level": "raid5f", 00:24:38.578 "superblock": true, 00:24:38.578 "num_base_bdevs": 4, 00:24:38.578 "num_base_bdevs_discovered": 4, 
00:24:38.578 "num_base_bdevs_operational": 4, 00:24:38.578 "process": { 00:24:38.578 "type": "rebuild", 00:24:38.578 "target": "spare", 00:24:38.578 "progress": { 00:24:38.578 "blocks": 153600, 00:24:38.578 "percent": 80 00:24:38.578 } 00:24:38.578 }, 00:24:38.578 "base_bdevs_list": [ 00:24:38.578 { 00:24:38.578 "name": "spare", 00:24:38.578 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:38.578 "is_configured": true, 00:24:38.578 "data_offset": 2048, 00:24:38.578 "data_size": 63488 00:24:38.578 }, 00:24:38.578 { 00:24:38.578 "name": "BaseBdev2", 00:24:38.578 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:38.578 "is_configured": true, 00:24:38.578 "data_offset": 2048, 00:24:38.578 "data_size": 63488 00:24:38.578 }, 00:24:38.578 { 00:24:38.578 "name": "BaseBdev3", 00:24:38.578 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:38.578 "is_configured": true, 00:24:38.578 "data_offset": 2048, 00:24:38.578 "data_size": 63488 00:24:38.578 }, 00:24:38.578 { 00:24:38.578 "name": "BaseBdev4", 00:24:38.578 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:38.578 "is_configured": true, 00:24:38.578 "data_offset": 2048, 00:24:38.578 "data_size": 63488 00:24:38.578 } 00:24:38.578 ] 00:24:38.578 }' 00:24:38.579 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:38.579 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:38.579 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:38.579 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.579 14:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:39.514 "name": "raid_bdev1", 00:24:39.514 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:39.514 "strip_size_kb": 64, 00:24:39.514 "state": "online", 00:24:39.514 "raid_level": "raid5f", 00:24:39.514 "superblock": true, 00:24:39.514 "num_base_bdevs": 4, 00:24:39.514 "num_base_bdevs_discovered": 4, 00:24:39.514 "num_base_bdevs_operational": 4, 00:24:39.514 "process": { 00:24:39.514 "type": "rebuild", 00:24:39.514 "target": "spare", 00:24:39.514 "progress": { 00:24:39.514 "blocks": 176640, 00:24:39.514 "percent": 92 00:24:39.514 } 00:24:39.514 }, 00:24:39.514 "base_bdevs_list": [ 00:24:39.514 { 00:24:39.514 "name": "spare", 00:24:39.514 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:39.514 "is_configured": true, 00:24:39.514 "data_offset": 2048, 00:24:39.514 "data_size": 63488 00:24:39.514 }, 00:24:39.514 { 00:24:39.514 "name": "BaseBdev2", 
00:24:39.514 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:39.514 "is_configured": true, 00:24:39.514 "data_offset": 2048, 00:24:39.514 "data_size": 63488 00:24:39.514 }, 00:24:39.514 { 00:24:39.514 "name": "BaseBdev3", 00:24:39.514 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:39.514 "is_configured": true, 00:24:39.514 "data_offset": 2048, 00:24:39.514 "data_size": 63488 00:24:39.514 }, 00:24:39.514 { 00:24:39.514 "name": "BaseBdev4", 00:24:39.514 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:39.514 "is_configured": true, 00:24:39.514 "data_offset": 2048, 00:24:39.514 "data_size": 63488 00:24:39.514 } 00:24:39.514 ] 00:24:39.514 }' 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.514 14:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:40.449 [2024-11-04 14:57:10.026262] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:40.449 [2024-11-04 14:57:10.026697] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:40.449 [2024-11-04 14:57:10.026927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:40.707 14:57:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.707 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.707 "name": "raid_bdev1", 00:24:40.708 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:40.708 "strip_size_kb": 64, 00:24:40.708 "state": "online", 00:24:40.708 "raid_level": "raid5f", 00:24:40.708 "superblock": true, 00:24:40.708 "num_base_bdevs": 4, 00:24:40.708 "num_base_bdevs_discovered": 4, 00:24:40.708 "num_base_bdevs_operational": 4, 00:24:40.708 "base_bdevs_list": [ 00:24:40.708 { 00:24:40.708 "name": "spare", 00:24:40.708 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:40.708 "is_configured": true, 00:24:40.708 "data_offset": 2048, 00:24:40.708 "data_size": 63488 00:24:40.708 }, 00:24:40.708 { 00:24:40.708 "name": "BaseBdev2", 00:24:40.708 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:40.708 "is_configured": true, 00:24:40.708 "data_offset": 2048, 00:24:40.708 "data_size": 63488 00:24:40.708 }, 00:24:40.708 { 00:24:40.708 "name": "BaseBdev3", 00:24:40.708 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:40.708 "is_configured": true, 00:24:40.708 "data_offset": 2048, 00:24:40.708 
"data_size": 63488 00:24:40.708 }, 00:24:40.708 { 00:24:40.708 "name": "BaseBdev4", 00:24:40.708 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:40.708 "is_configured": true, 00:24:40.708 "data_offset": 2048, 00:24:40.708 "data_size": 63488 00:24:40.708 } 00:24:40.708 ] 00:24:40.708 }' 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.708 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.966 14:57:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.966 "name": "raid_bdev1", 00:24:40.967 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:40.967 "strip_size_kb": 64, 00:24:40.967 "state": "online", 00:24:40.967 "raid_level": "raid5f", 00:24:40.967 "superblock": true, 00:24:40.967 "num_base_bdevs": 4, 00:24:40.967 "num_base_bdevs_discovered": 4, 00:24:40.967 "num_base_bdevs_operational": 4, 00:24:40.967 "base_bdevs_list": [ 00:24:40.967 { 00:24:40.967 "name": "spare", 00:24:40.967 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:40.967 "is_configured": true, 00:24:40.967 "data_offset": 2048, 00:24:40.967 "data_size": 63488 00:24:40.967 }, 00:24:40.967 { 00:24:40.967 "name": "BaseBdev2", 00:24:40.967 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:40.967 "is_configured": true, 00:24:40.967 "data_offset": 2048, 00:24:40.967 "data_size": 63488 00:24:40.967 }, 00:24:40.967 { 00:24:40.967 "name": "BaseBdev3", 00:24:40.967 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:40.967 "is_configured": true, 00:24:40.967 "data_offset": 2048, 00:24:40.967 "data_size": 63488 00:24:40.967 }, 00:24:40.967 { 00:24:40.967 "name": "BaseBdev4", 00:24:40.967 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:40.967 "is_configured": true, 00:24:40.967 "data_offset": 2048, 00:24:40.967 "data_size": 63488 00:24:40.967 } 00:24:40.967 ] 00:24:40.967 }' 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.967 "name": "raid_bdev1", 00:24:40.967 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:40.967 "strip_size_kb": 64, 00:24:40.967 "state": "online", 00:24:40.967 "raid_level": "raid5f", 00:24:40.967 "superblock": true, 00:24:40.967 "num_base_bdevs": 4, 00:24:40.967 "num_base_bdevs_discovered": 4, 00:24:40.967 
"num_base_bdevs_operational": 4, 00:24:40.967 "base_bdevs_list": [ 00:24:40.967 { 00:24:40.967 "name": "spare", 00:24:40.967 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:40.967 "is_configured": true, 00:24:40.967 "data_offset": 2048, 00:24:40.967 "data_size": 63488 00:24:40.967 }, 00:24:40.967 { 00:24:40.967 "name": "BaseBdev2", 00:24:40.967 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:40.967 "is_configured": true, 00:24:40.967 "data_offset": 2048, 00:24:40.967 "data_size": 63488 00:24:40.967 }, 00:24:40.967 { 00:24:40.967 "name": "BaseBdev3", 00:24:40.967 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:40.967 "is_configured": true, 00:24:40.967 "data_offset": 2048, 00:24:40.967 "data_size": 63488 00:24:40.967 }, 00:24:40.967 { 00:24:40.967 "name": "BaseBdev4", 00:24:40.967 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:40.967 "is_configured": true, 00:24:40.967 "data_offset": 2048, 00:24:40.967 "data_size": 63488 00:24:40.967 } 00:24:40.967 ] 00:24:40.967 }' 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.967 14:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.534 [2024-11-04 14:57:11.262671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.534 [2024-11-04 14:57:11.262715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.534 [2024-11-04 14:57:11.262856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.534 [2024-11-04 14:57:11.262984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:24:41.534 [2024-11-04 14:57:11.263011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:41.534 14:57:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:41.534 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:41.792 /dev/nbd0 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:41.792 1+0 records in 00:24:41.792 1+0 records out 00:24:41.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312079 s, 13.1 MB/s 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # size=4096 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:41.792 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:42.051 /dev/nbd1 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.051 1+0 records in 00:24:42.051 1+0 records out 00:24:42.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292057 s, 14.0 MB/s 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:42.051 14:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:42.313 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:42.313 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:42.313 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:42.313 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:42.313 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:42.313 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.313 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.577 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.144 [2024-11-04 14:57:12.828347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:43.144 [2024-11-04 14:57:12.828549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.144 [2024-11-04 14:57:12.828597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:43.144 [2024-11-04 14:57:12.828628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.144 [2024-11-04 14:57:12.832080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.144 [2024-11-04 14:57:12.832124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:43.144 [2024-11-04 14:57:12.832310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:43.144 [2024-11-04 14:57:12.832384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:43.144 [2024-11-04 14:57:12.832613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:43.144 spare 00:24:43.144 [2024-11-04 14:57:12.832877] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.144 [2024-11-04 14:57:12.833300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.144 [2024-11-04 14:57:12.933452] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:43.144 [2024-11-04 14:57:12.933533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:43.144 [2024-11-04 14:57:12.934106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:24:43.144 [2024-11-04 14:57:12.941416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:43.144 [2024-11-04 14:57:12.941592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:43.144 [2024-11-04 14:57:12.942043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.144 14:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.144 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.144 "name": "raid_bdev1", 00:24:43.144 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:43.144 "strip_size_kb": 64, 00:24:43.144 "state": "online", 00:24:43.144 "raid_level": "raid5f", 00:24:43.144 "superblock": true, 00:24:43.144 "num_base_bdevs": 4, 00:24:43.144 "num_base_bdevs_discovered": 4, 00:24:43.144 "num_base_bdevs_operational": 4, 00:24:43.144 "base_bdevs_list": [ 00:24:43.144 { 00:24:43.144 "name": "spare", 00:24:43.144 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:43.144 "is_configured": true, 00:24:43.144 "data_offset": 2048, 00:24:43.144 "data_size": 63488 00:24:43.144 }, 00:24:43.144 { 00:24:43.144 "name": "BaseBdev2", 00:24:43.144 "uuid": 
"528091f6-461b-515e-b174-773fcd5b7456", 00:24:43.144 "is_configured": true, 00:24:43.144 "data_offset": 2048, 00:24:43.144 "data_size": 63488 00:24:43.144 }, 00:24:43.144 { 00:24:43.144 "name": "BaseBdev3", 00:24:43.144 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:43.144 "is_configured": true, 00:24:43.144 "data_offset": 2048, 00:24:43.144 "data_size": 63488 00:24:43.144 }, 00:24:43.144 { 00:24:43.144 "name": "BaseBdev4", 00:24:43.144 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:43.144 "is_configured": true, 00:24:43.144 "data_offset": 2048, 00:24:43.144 "data_size": 63488 00:24:43.144 } 00:24:43.144 ] 00:24:43.144 }' 00:24:43.144 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.144 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.711 14:57:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.711 "name": "raid_bdev1", 00:24:43.711 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:43.711 "strip_size_kb": 64, 00:24:43.711 "state": "online", 00:24:43.711 "raid_level": "raid5f", 00:24:43.711 "superblock": true, 00:24:43.711 "num_base_bdevs": 4, 00:24:43.711 "num_base_bdevs_discovered": 4, 00:24:43.711 "num_base_bdevs_operational": 4, 00:24:43.711 "base_bdevs_list": [ 00:24:43.711 { 00:24:43.711 "name": "spare", 00:24:43.711 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:43.711 "is_configured": true, 00:24:43.711 "data_offset": 2048, 00:24:43.711 "data_size": 63488 00:24:43.711 }, 00:24:43.711 { 00:24:43.711 "name": "BaseBdev2", 00:24:43.711 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:43.711 "is_configured": true, 00:24:43.711 "data_offset": 2048, 00:24:43.711 "data_size": 63488 00:24:43.711 }, 00:24:43.711 { 00:24:43.711 "name": "BaseBdev3", 00:24:43.711 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:43.711 "is_configured": true, 00:24:43.711 "data_offset": 2048, 00:24:43.711 "data_size": 63488 00:24:43.711 }, 00:24:43.711 { 00:24:43.711 "name": "BaseBdev4", 00:24:43.711 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:43.711 "is_configured": true, 00:24:43.711 "data_offset": 2048, 00:24:43.711 "data_size": 63488 00:24:43.711 } 00:24:43.711 ] 00:24:43.711 }' 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:43.711 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.969 
14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.969 [2024-11-04 14:57:13.694860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:43.969 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.970 "name": "raid_bdev1", 00:24:43.970 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:43.970 "strip_size_kb": 64, 00:24:43.970 "state": "online", 00:24:43.970 "raid_level": "raid5f", 00:24:43.970 "superblock": true, 00:24:43.970 "num_base_bdevs": 4, 00:24:43.970 "num_base_bdevs_discovered": 3, 00:24:43.970 "num_base_bdevs_operational": 3, 00:24:43.970 "base_bdevs_list": [ 00:24:43.970 { 00:24:43.970 "name": null, 00:24:43.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.970 "is_configured": false, 00:24:43.970 "data_offset": 0, 00:24:43.970 "data_size": 63488 00:24:43.970 }, 00:24:43.970 { 00:24:43.970 "name": "BaseBdev2", 00:24:43.970 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:43.970 "is_configured": true, 00:24:43.970 "data_offset": 2048, 00:24:43.970 "data_size": 63488 00:24:43.970 }, 00:24:43.970 { 00:24:43.970 "name": "BaseBdev3", 00:24:43.970 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:43.970 "is_configured": true, 00:24:43.970 "data_offset": 2048, 00:24:43.970 "data_size": 63488 00:24:43.970 }, 00:24:43.970 { 00:24:43.970 "name": "BaseBdev4", 
00:24:43.970 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:43.970 "is_configured": true, 00:24:43.970 "data_offset": 2048, 00:24:43.970 "data_size": 63488 00:24:43.970 } 00:24:43.970 ] 00:24:43.970 }' 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.970 14:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:44.535 14:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:44.535 14:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.535 14:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:44.535 [2024-11-04 14:57:14.211023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:44.536 [2024-11-04 14:57:14.211442] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:44.536 [2024-11-04 14:57:14.211477] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:44.536 [2024-11-04 14:57:14.211563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:44.536 [2024-11-04 14:57:14.224559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:24:44.536 14:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.536 14:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:44.536 [2024-11-04 14:57:14.232521] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:45.470 "name": "raid_bdev1", 00:24:45.470 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:45.470 "strip_size_kb": 64, 00:24:45.470 "state": "online", 00:24:45.470 
"raid_level": "raid5f", 00:24:45.470 "superblock": true, 00:24:45.470 "num_base_bdevs": 4, 00:24:45.470 "num_base_bdevs_discovered": 4, 00:24:45.470 "num_base_bdevs_operational": 4, 00:24:45.470 "process": { 00:24:45.470 "type": "rebuild", 00:24:45.470 "target": "spare", 00:24:45.470 "progress": { 00:24:45.470 "blocks": 17280, 00:24:45.470 "percent": 9 00:24:45.470 } 00:24:45.470 }, 00:24:45.470 "base_bdevs_list": [ 00:24:45.470 { 00:24:45.470 "name": "spare", 00:24:45.470 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:45.470 "is_configured": true, 00:24:45.470 "data_offset": 2048, 00:24:45.470 "data_size": 63488 00:24:45.470 }, 00:24:45.470 { 00:24:45.470 "name": "BaseBdev2", 00:24:45.470 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:45.470 "is_configured": true, 00:24:45.470 "data_offset": 2048, 00:24:45.470 "data_size": 63488 00:24:45.470 }, 00:24:45.470 { 00:24:45.470 "name": "BaseBdev3", 00:24:45.470 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:45.470 "is_configured": true, 00:24:45.470 "data_offset": 2048, 00:24:45.470 "data_size": 63488 00:24:45.470 }, 00:24:45.470 { 00:24:45.470 "name": "BaseBdev4", 00:24:45.470 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:45.470 "is_configured": true, 00:24:45.470 "data_offset": 2048, 00:24:45.470 "data_size": 63488 00:24:45.470 } 00:24:45.470 ] 00:24:45.470 }' 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.470 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.728 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.728 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:45.728 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.728 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.729 [2024-11-04 14:57:15.393661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:45.729 [2024-11-04 14:57:15.443695] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:45.729 [2024-11-04 14:57:15.443952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.729 [2024-11-04 14:57:15.443982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:45.729 [2024-11-04 14:57:15.443998] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.729 "name": "raid_bdev1", 00:24:45.729 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:45.729 "strip_size_kb": 64, 00:24:45.729 "state": "online", 00:24:45.729 "raid_level": "raid5f", 00:24:45.729 "superblock": true, 00:24:45.729 "num_base_bdevs": 4, 00:24:45.729 "num_base_bdevs_discovered": 3, 00:24:45.729 "num_base_bdevs_operational": 3, 00:24:45.729 "base_bdevs_list": [ 00:24:45.729 { 00:24:45.729 "name": null, 00:24:45.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.729 "is_configured": false, 00:24:45.729 "data_offset": 0, 00:24:45.729 "data_size": 63488 00:24:45.729 }, 00:24:45.729 { 00:24:45.729 "name": "BaseBdev2", 00:24:45.729 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:45.729 "is_configured": true, 00:24:45.729 "data_offset": 2048, 00:24:45.729 "data_size": 63488 00:24:45.729 }, 00:24:45.729 { 00:24:45.729 "name": "BaseBdev3", 00:24:45.729 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:45.729 "is_configured": true, 00:24:45.729 "data_offset": 2048, 00:24:45.729 "data_size": 63488 00:24:45.729 }, 00:24:45.729 { 00:24:45.729 "name": "BaseBdev4", 00:24:45.729 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:45.729 "is_configured": true, 00:24:45.729 "data_offset": 2048, 00:24:45.729 "data_size": 63488 00:24:45.729 } 00:24:45.729 ] 00:24:45.729 }' 
00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.729 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.296 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:46.296 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.296 14:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.296 [2024-11-04 14:57:15.995905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:46.296 [2024-11-04 14:57:15.996026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.296 [2024-11-04 14:57:15.996088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:46.296 [2024-11-04 14:57:15.996108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.296 [2024-11-04 14:57:15.996912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.296 [2024-11-04 14:57:15.996960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:46.296 [2024-11-04 14:57:15.997125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:46.296 [2024-11-04 14:57:15.997150] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:46.296 [2024-11-04 14:57:15.997165] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:46.296 [2024-11-04 14:57:15.997203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:46.296 spare 00:24:46.296 [2024-11-04 14:57:16.011410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:24:46.296 14:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.296 14:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:46.296 [2024-11-04 14:57:16.021462] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:47.259 "name": "raid_bdev1", 00:24:47.259 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:47.259 "strip_size_kb": 64, 00:24:47.259 "state": 
"online", 00:24:47.259 "raid_level": "raid5f", 00:24:47.259 "superblock": true, 00:24:47.259 "num_base_bdevs": 4, 00:24:47.259 "num_base_bdevs_discovered": 4, 00:24:47.259 "num_base_bdevs_operational": 4, 00:24:47.259 "process": { 00:24:47.259 "type": "rebuild", 00:24:47.259 "target": "spare", 00:24:47.259 "progress": { 00:24:47.259 "blocks": 17280, 00:24:47.259 "percent": 9 00:24:47.259 } 00:24:47.259 }, 00:24:47.259 "base_bdevs_list": [ 00:24:47.259 { 00:24:47.259 "name": "spare", 00:24:47.259 "uuid": "aa416c65-f725-5796-ba28-f2dd6e021453", 00:24:47.259 "is_configured": true, 00:24:47.259 "data_offset": 2048, 00:24:47.259 "data_size": 63488 00:24:47.259 }, 00:24:47.259 { 00:24:47.259 "name": "BaseBdev2", 00:24:47.259 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:47.259 "is_configured": true, 00:24:47.259 "data_offset": 2048, 00:24:47.259 "data_size": 63488 00:24:47.259 }, 00:24:47.259 { 00:24:47.259 "name": "BaseBdev3", 00:24:47.259 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:47.259 "is_configured": true, 00:24:47.259 "data_offset": 2048, 00:24:47.259 "data_size": 63488 00:24:47.259 }, 00:24:47.259 { 00:24:47.259 "name": "BaseBdev4", 00:24:47.259 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:47.259 "is_configured": true, 00:24:47.259 "data_offset": 2048, 00:24:47.259 "data_size": 63488 00:24:47.259 } 00:24:47.259 ] 00:24:47.259 }' 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:47.259 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:47.518 14:57:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.518 [2024-11-04 14:57:17.182895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.518 [2024-11-04 14:57:17.232849] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:47.518 [2024-11-04 14:57:17.233102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.518 [2024-11-04 14:57:17.233138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.518 [2024-11-04 14:57:17.233163] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.518 14:57:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.518 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:47.518 "name": "raid_bdev1", 00:24:47.518 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:47.518 "strip_size_kb": 64, 00:24:47.518 "state": "online", 00:24:47.518 "raid_level": "raid5f", 00:24:47.518 "superblock": true, 00:24:47.518 "num_base_bdevs": 4, 00:24:47.518 "num_base_bdevs_discovered": 3, 00:24:47.518 "num_base_bdevs_operational": 3, 00:24:47.518 "base_bdevs_list": [ 00:24:47.518 { 00:24:47.518 "name": null, 00:24:47.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.518 "is_configured": false, 00:24:47.518 "data_offset": 0, 00:24:47.518 "data_size": 63488 00:24:47.518 }, 00:24:47.518 { 00:24:47.518 "name": "BaseBdev2", 00:24:47.518 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:47.519 "is_configured": true, 00:24:47.519 "data_offset": 2048, 00:24:47.519 "data_size": 63488 00:24:47.519 }, 00:24:47.519 { 00:24:47.519 "name": "BaseBdev3", 00:24:47.519 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:47.519 "is_configured": true, 00:24:47.519 "data_offset": 2048, 00:24:47.519 "data_size": 63488 00:24:47.519 }, 00:24:47.519 { 00:24:47.519 "name": "BaseBdev4", 00:24:47.519 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:47.519 "is_configured": true, 00:24:47.519 "data_offset": 2048, 00:24:47.519 
"data_size": 63488 00:24:47.519 } 00:24:47.519 ] 00:24:47.519 }' 00:24:47.519 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:47.519 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.084 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.084 "name": "raid_bdev1", 00:24:48.084 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:48.084 "strip_size_kb": 64, 00:24:48.084 "state": "online", 00:24:48.084 "raid_level": "raid5f", 00:24:48.084 "superblock": true, 00:24:48.084 "num_base_bdevs": 4, 00:24:48.084 "num_base_bdevs_discovered": 3, 00:24:48.084 "num_base_bdevs_operational": 3, 00:24:48.084 "base_bdevs_list": [ 00:24:48.084 { 00:24:48.084 "name": null, 00:24:48.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.084 
"is_configured": false, 00:24:48.084 "data_offset": 0, 00:24:48.084 "data_size": 63488 00:24:48.084 }, 00:24:48.084 { 00:24:48.084 "name": "BaseBdev2", 00:24:48.084 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:48.084 "is_configured": true, 00:24:48.084 "data_offset": 2048, 00:24:48.084 "data_size": 63488 00:24:48.084 }, 00:24:48.084 { 00:24:48.084 "name": "BaseBdev3", 00:24:48.084 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:48.084 "is_configured": true, 00:24:48.084 "data_offset": 2048, 00:24:48.084 "data_size": 63488 00:24:48.084 }, 00:24:48.085 { 00:24:48.085 "name": "BaseBdev4", 00:24:48.085 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:48.085 "is_configured": true, 00:24:48.085 "data_offset": 2048, 00:24:48.085 "data_size": 63488 00:24:48.085 } 00:24:48.085 ] 00:24:48.085 }' 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.085 14:57:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.085 [2024-11-04 14:57:17.953440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:48.085 [2024-11-04 14:57:17.953531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.085 [2024-11-04 14:57:17.953567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:48.085 [2024-11-04 14:57:17.953610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.085 [2024-11-04 14:57:17.954383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.085 [2024-11-04 14:57:17.954444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:48.085 [2024-11-04 14:57:17.954573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:48.085 [2024-11-04 14:57:17.954632] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:48.085 [2024-11-04 14:57:17.954648] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:48.085 [2024-11-04 14:57:17.954662] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:48.085 BaseBdev1 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.085 14:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.460 14:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.460 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.460 "name": "raid_bdev1", 00:24:49.460 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:49.460 "strip_size_kb": 64, 00:24:49.460 "state": "online", 00:24:49.460 "raid_level": "raid5f", 00:24:49.460 "superblock": true, 00:24:49.460 "num_base_bdevs": 4, 00:24:49.460 "num_base_bdevs_discovered": 3, 00:24:49.460 "num_base_bdevs_operational": 3, 00:24:49.460 "base_bdevs_list": [ 00:24:49.460 { 00:24:49.460 "name": null, 00:24:49.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.460 "is_configured": false, 00:24:49.460 
"data_offset": 0, 00:24:49.460 "data_size": 63488 00:24:49.460 }, 00:24:49.460 { 00:24:49.460 "name": "BaseBdev2", 00:24:49.460 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:49.460 "is_configured": true, 00:24:49.460 "data_offset": 2048, 00:24:49.460 "data_size": 63488 00:24:49.460 }, 00:24:49.460 { 00:24:49.460 "name": "BaseBdev3", 00:24:49.460 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:49.460 "is_configured": true, 00:24:49.460 "data_offset": 2048, 00:24:49.460 "data_size": 63488 00:24:49.460 }, 00:24:49.460 { 00:24:49.460 "name": "BaseBdev4", 00:24:49.460 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:49.460 "is_configured": true, 00:24:49.460 "data_offset": 2048, 00:24:49.460 "data_size": 63488 00:24:49.460 } 00:24:49.460 ] 00:24:49.460 }' 00:24:49.460 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.460 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:49.719 "name": "raid_bdev1", 00:24:49.719 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:49.719 "strip_size_kb": 64, 00:24:49.719 "state": "online", 00:24:49.719 "raid_level": "raid5f", 00:24:49.719 "superblock": true, 00:24:49.719 "num_base_bdevs": 4, 00:24:49.719 "num_base_bdevs_discovered": 3, 00:24:49.719 "num_base_bdevs_operational": 3, 00:24:49.719 "base_bdevs_list": [ 00:24:49.719 { 00:24:49.719 "name": null, 00:24:49.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.719 "is_configured": false, 00:24:49.719 "data_offset": 0, 00:24:49.719 "data_size": 63488 00:24:49.719 }, 00:24:49.719 { 00:24:49.719 "name": "BaseBdev2", 00:24:49.719 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:49.719 "is_configured": true, 00:24:49.719 "data_offset": 2048, 00:24:49.719 "data_size": 63488 00:24:49.719 }, 00:24:49.719 { 00:24:49.719 "name": "BaseBdev3", 00:24:49.719 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:49.719 "is_configured": true, 00:24:49.719 "data_offset": 2048, 00:24:49.719 "data_size": 63488 00:24:49.719 }, 00:24:49.719 { 00:24:49.719 "name": "BaseBdev4", 00:24:49.719 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:49.719 "is_configured": true, 00:24:49.719 "data_offset": 2048, 00:24:49.719 "data_size": 63488 00:24:49.719 } 00:24:49.719 ] 00:24:49.719 }' 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:49.719 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:49.978 
14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.978 [2024-11-04 14:57:19.646058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:49.978 [2024-11-04 14:57:19.646551] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:49.978 [2024-11-04 14:57:19.646586] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:49.978 request: 00:24:49.978 { 00:24:49.978 "base_bdev": "BaseBdev1", 00:24:49.978 "raid_bdev": "raid_bdev1", 00:24:49.978 "method": "bdev_raid_add_base_bdev", 00:24:49.978 "req_id": 1 00:24:49.978 } 00:24:49.978 Got JSON-RPC error response 00:24:49.978 response: 00:24:49.978 { 00:24:49.978 "code": -22, 00:24:49.978 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:24:49.978 } 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:49.978 14:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.912 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.912 "name": "raid_bdev1", 00:24:50.912 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:50.912 "strip_size_kb": 64, 00:24:50.912 "state": "online", 00:24:50.912 "raid_level": "raid5f", 00:24:50.912 "superblock": true, 00:24:50.912 "num_base_bdevs": 4, 00:24:50.912 "num_base_bdevs_discovered": 3, 00:24:50.912 "num_base_bdevs_operational": 3, 00:24:50.912 "base_bdevs_list": [ 00:24:50.912 { 00:24:50.912 "name": null, 00:24:50.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.912 "is_configured": false, 00:24:50.912 "data_offset": 0, 00:24:50.912 "data_size": 63488 00:24:50.912 }, 00:24:50.912 { 00:24:50.912 "name": "BaseBdev2", 00:24:50.912 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:50.912 "is_configured": true, 00:24:50.912 "data_offset": 2048, 00:24:50.912 "data_size": 63488 00:24:50.912 }, 00:24:50.912 { 00:24:50.912 "name": "BaseBdev3", 00:24:50.912 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:50.913 "is_configured": true, 00:24:50.913 "data_offset": 2048, 00:24:50.913 "data_size": 63488 00:24:50.913 }, 00:24:50.913 { 00:24:50.913 "name": "BaseBdev4", 00:24:50.913 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:50.913 "is_configured": true, 00:24:50.913 "data_offset": 2048, 00:24:50.913 "data_size": 63488 00:24:50.913 } 00:24:50.913 ] 00:24:50.913 }' 00:24:50.913 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.913 14:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:51.479 "name": "raid_bdev1", 00:24:51.479 "uuid": "ade52981-f0c3-4240-9927-3a1c3dae0da3", 00:24:51.479 "strip_size_kb": 64, 00:24:51.479 "state": "online", 00:24:51.479 "raid_level": "raid5f", 00:24:51.479 "superblock": true, 00:24:51.479 "num_base_bdevs": 4, 00:24:51.479 "num_base_bdevs_discovered": 3, 00:24:51.479 "num_base_bdevs_operational": 3, 00:24:51.479 "base_bdevs_list": [ 00:24:51.479 { 00:24:51.479 "name": null, 00:24:51.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.479 "is_configured": false, 00:24:51.479 "data_offset": 0, 00:24:51.479 "data_size": 63488 00:24:51.479 }, 00:24:51.479 { 00:24:51.479 "name": "BaseBdev2", 00:24:51.479 "uuid": "528091f6-461b-515e-b174-773fcd5b7456", 00:24:51.479 "is_configured": true, 
00:24:51.479 "data_offset": 2048, 00:24:51.479 "data_size": 63488 00:24:51.479 }, 00:24:51.479 { 00:24:51.479 "name": "BaseBdev3", 00:24:51.479 "uuid": "7350c037-d7cb-55ae-8124-2f0a454fdeab", 00:24:51.479 "is_configured": true, 00:24:51.479 "data_offset": 2048, 00:24:51.479 "data_size": 63488 00:24:51.479 }, 00:24:51.479 { 00:24:51.479 "name": "BaseBdev4", 00:24:51.479 "uuid": "600ffb06-9c96-5aad-b5af-bf6b8a39f8ef", 00:24:51.479 "is_configured": true, 00:24:51.479 "data_offset": 2048, 00:24:51.479 "data_size": 63488 00:24:51.479 } 00:24:51.479 ] 00:24:51.479 }' 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85673 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85673 ']' 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85673 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:51.479 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85673 00:24:51.479 killing process with pid 85673 00:24:51.479 Received shutdown signal, test time was about 60.000000 seconds 00:24:51.479 00:24:51.479 Latency(us) 00:24:51.479 [2024-11-04T14:57:21.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.479 [2024-11-04T14:57:21.371Z] 
=================================================================================================================== 00:24:51.479 [2024-11-04T14:57:21.372Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:51.480 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:51.480 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:51.480 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85673' 00:24:51.480 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85673 00:24:51.480 [2024-11-04 14:57:21.358884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:51.480 14:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85673 00:24:51.480 [2024-11-04 14:57:21.359074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:51.480 [2024-11-04 14:57:21.359175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:51.480 [2024-11-04 14:57:21.359225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:52.046 [2024-11-04 14:57:21.734056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:52.982 14:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:52.982 00:24:52.982 real 0m28.640s 00:24:52.982 user 0m37.232s 00:24:52.982 sys 0m2.998s 00:24:52.982 14:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:52.982 ************************************ 00:24:52.982 END TEST raid5f_rebuild_test_sb 00:24:52.982 ************************************ 00:24:52.982 14:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.982 14:57:22 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:24:52.982 14:57:22 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:24:52.982 14:57:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:52.982 14:57:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:52.982 14:57:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:52.982 ************************************ 00:24:52.982 START TEST raid_state_function_test_sb_4k 00:24:52.982 ************************************ 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:52.982 14:57:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:52.982 Process raid pid: 86497 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86497 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86497' 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86497 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86497 ']' 00:24:52.982 14:57:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:52.982 14:57:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:53.240 [2024-11-04 14:57:22.925523] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:24:53.240 [2024-11-04 14:57:22.926013] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.240 [2024-11-04 14:57:23.121926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.499 [2024-11-04 14:57:23.258791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.757 [2024-11-04 14:57:23.479914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.757 [2024-11-04 14:57:23.480140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.016 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:54.016 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:24:54.016 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:24:54.016 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.016 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.016 [2024-11-04 14:57:23.901451] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:54.016 [2024-11-04 14:57:23.901745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:54.016 [2024-11-04 14:57:23.901915] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:54.016 [2024-11-04 14:57:23.901949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.275 
14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.275 "name": "Existed_Raid", 00:24:54.275 "uuid": "29521556-ee8e-47e2-bb56-a405aa755371", 00:24:54.275 "strip_size_kb": 0, 00:24:54.275 "state": "configuring", 00:24:54.275 "raid_level": "raid1", 00:24:54.275 "superblock": true, 00:24:54.275 "num_base_bdevs": 2, 00:24:54.275 "num_base_bdevs_discovered": 0, 00:24:54.275 "num_base_bdevs_operational": 2, 00:24:54.275 "base_bdevs_list": [ 00:24:54.275 { 00:24:54.275 "name": "BaseBdev1", 00:24:54.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.275 "is_configured": false, 00:24:54.275 "data_offset": 0, 00:24:54.275 "data_size": 0 00:24:54.275 }, 00:24:54.275 { 00:24:54.275 "name": "BaseBdev2", 00:24:54.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.275 "is_configured": false, 00:24:54.275 "data_offset": 0, 00:24:54.275 "data_size": 0 00:24:54.275 } 00:24:54.275 ] 00:24:54.275 }' 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.275 14:57:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.533 [2024-11-04 14:57:24.409650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:54.533 [2024-11-04 14:57:24.409697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.533 [2024-11-04 14:57:24.417598] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:54.533 [2024-11-04 14:57:24.417797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:54.533 [2024-11-04 14:57:24.417935] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:54.533 [2024-11-04 14:57:24.418001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:24:54.533 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.533 14:57:24 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.792 [2024-11-04 14:57:24.467104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.792 BaseBdev1 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.792 [ 00:24:54.792 { 00:24:54.792 "name": "BaseBdev1", 00:24:54.792 "aliases": [ 00:24:54.792 
"bdb88b67-5558-4bdc-8487-944651e57c86" 00:24:54.792 ], 00:24:54.792 "product_name": "Malloc disk", 00:24:54.792 "block_size": 4096, 00:24:54.792 "num_blocks": 8192, 00:24:54.792 "uuid": "bdb88b67-5558-4bdc-8487-944651e57c86", 00:24:54.792 "assigned_rate_limits": { 00:24:54.792 "rw_ios_per_sec": 0, 00:24:54.792 "rw_mbytes_per_sec": 0, 00:24:54.792 "r_mbytes_per_sec": 0, 00:24:54.792 "w_mbytes_per_sec": 0 00:24:54.792 }, 00:24:54.792 "claimed": true, 00:24:54.792 "claim_type": "exclusive_write", 00:24:54.792 "zoned": false, 00:24:54.792 "supported_io_types": { 00:24:54.792 "read": true, 00:24:54.792 "write": true, 00:24:54.792 "unmap": true, 00:24:54.792 "flush": true, 00:24:54.792 "reset": true, 00:24:54.792 "nvme_admin": false, 00:24:54.792 "nvme_io": false, 00:24:54.792 "nvme_io_md": false, 00:24:54.792 "write_zeroes": true, 00:24:54.792 "zcopy": true, 00:24:54.792 "get_zone_info": false, 00:24:54.792 "zone_management": false, 00:24:54.792 "zone_append": false, 00:24:54.792 "compare": false, 00:24:54.792 "compare_and_write": false, 00:24:54.792 "abort": true, 00:24:54.792 "seek_hole": false, 00:24:54.792 "seek_data": false, 00:24:54.792 "copy": true, 00:24:54.792 "nvme_iov_md": false 00:24:54.792 }, 00:24:54.792 "memory_domains": [ 00:24:54.792 { 00:24:54.792 "dma_device_id": "system", 00:24:54.792 "dma_device_type": 1 00:24:54.792 }, 00:24:54.792 { 00:24:54.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.792 "dma_device_type": 2 00:24:54.792 } 00:24:54.792 ], 00:24:54.792 "driver_specific": {} 00:24:54.792 } 00:24:54.792 ] 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.792 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.792 "name": "Existed_Raid", 00:24:54.792 "uuid": "cefdf6fa-093f-4dea-9695-e67a63769abd", 00:24:54.792 "strip_size_kb": 0, 00:24:54.792 "state": "configuring", 00:24:54.792 "raid_level": "raid1", 00:24:54.792 "superblock": true, 00:24:54.792 "num_base_bdevs": 2, 00:24:54.793 
"num_base_bdevs_discovered": 1, 00:24:54.793 "num_base_bdevs_operational": 2, 00:24:54.793 "base_bdevs_list": [ 00:24:54.793 { 00:24:54.793 "name": "BaseBdev1", 00:24:54.793 "uuid": "bdb88b67-5558-4bdc-8487-944651e57c86", 00:24:54.793 "is_configured": true, 00:24:54.793 "data_offset": 256, 00:24:54.793 "data_size": 7936 00:24:54.793 }, 00:24:54.793 { 00:24:54.793 "name": "BaseBdev2", 00:24:54.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.793 "is_configured": false, 00:24:54.793 "data_offset": 0, 00:24:54.793 "data_size": 0 00:24:54.793 } 00:24:54.793 ] 00:24:54.793 }' 00:24:54.793 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.793 14:57:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.359 [2024-11-04 14:57:25.055392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:55.359 [2024-11-04 14:57:25.055465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.359 [2024-11-04 14:57:25.063424] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:55.359 [2024-11-04 14:57:25.066155] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:55.359 [2024-11-04 14:57:25.066225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.359 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.360 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.360 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.360 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.360 "name": "Existed_Raid", 00:24:55.360 "uuid": "9c3dc240-e4ee-4475-b075-6432c04d4905", 00:24:55.360 "strip_size_kb": 0, 00:24:55.360 "state": "configuring", 00:24:55.360 "raid_level": "raid1", 00:24:55.360 "superblock": true, 00:24:55.360 "num_base_bdevs": 2, 00:24:55.360 "num_base_bdevs_discovered": 1, 00:24:55.360 "num_base_bdevs_operational": 2, 00:24:55.360 "base_bdevs_list": [ 00:24:55.360 { 00:24:55.360 "name": "BaseBdev1", 00:24:55.360 "uuid": "bdb88b67-5558-4bdc-8487-944651e57c86", 00:24:55.360 "is_configured": true, 00:24:55.360 "data_offset": 256, 00:24:55.360 "data_size": 7936 00:24:55.360 }, 00:24:55.360 { 00:24:55.360 "name": "BaseBdev2", 00:24:55.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.360 "is_configured": false, 00:24:55.360 "data_offset": 0, 00:24:55.360 "data_size": 0 00:24:55.360 } 00:24:55.360 ] 00:24:55.360 }' 00:24:55.360 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.360 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.936 14:57:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.936 [2024-11-04 14:57:25.627117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.936 [2024-11-04 14:57:25.627498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:55.936 [2024-11-04 14:57:25.627517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:55.936 BaseBdev2 00:24:55.936 [2024-11-04 14:57:25.627908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:55.936 [2024-11-04 14:57:25.628119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:55.936 [2024-11-04 14:57:25.628140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:55.936 [2024-11-04 14:57:25.628361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:55.936 14:57:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.936 [ 00:24:55.936 { 00:24:55.936 "name": "BaseBdev2", 00:24:55.936 "aliases": [ 00:24:55.936 "b3dec6f0-68ca-4452-b063-b9b0ef021223" 00:24:55.936 ], 00:24:55.936 "product_name": "Malloc disk", 00:24:55.936 "block_size": 4096, 00:24:55.936 "num_blocks": 8192, 00:24:55.936 "uuid": "b3dec6f0-68ca-4452-b063-b9b0ef021223", 00:24:55.936 "assigned_rate_limits": { 00:24:55.936 "rw_ios_per_sec": 0, 00:24:55.936 "rw_mbytes_per_sec": 0, 00:24:55.936 "r_mbytes_per_sec": 0, 00:24:55.936 "w_mbytes_per_sec": 0 00:24:55.936 }, 00:24:55.936 "claimed": true, 00:24:55.936 "claim_type": "exclusive_write", 00:24:55.936 "zoned": false, 00:24:55.936 "supported_io_types": { 00:24:55.936 "read": true, 00:24:55.936 "write": true, 00:24:55.936 "unmap": true, 00:24:55.936 "flush": true, 00:24:55.936 "reset": true, 00:24:55.936 "nvme_admin": false, 00:24:55.936 "nvme_io": false, 00:24:55.936 "nvme_io_md": false, 00:24:55.936 "write_zeroes": true, 00:24:55.936 "zcopy": true, 00:24:55.936 "get_zone_info": false, 00:24:55.936 "zone_management": false, 00:24:55.936 "zone_append": false, 00:24:55.936 "compare": false, 00:24:55.936 "compare_and_write": false, 00:24:55.936 "abort": true, 00:24:55.936 "seek_hole": false, 00:24:55.936 "seek_data": false, 00:24:55.936 "copy": true, 00:24:55.936 "nvme_iov_md": false 
00:24:55.936 }, 00:24:55.936 "memory_domains": [ 00:24:55.936 { 00:24:55.936 "dma_device_id": "system", 00:24:55.936 "dma_device_type": 1 00:24:55.936 }, 00:24:55.936 { 00:24:55.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.936 "dma_device_type": 2 00:24:55.936 } 00:24:55.936 ], 00:24:55.936 "driver_specific": {} 00:24:55.936 } 00:24:55.936 ] 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:55.936 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.937 "name": "Existed_Raid", 00:24:55.937 "uuid": "9c3dc240-e4ee-4475-b075-6432c04d4905", 00:24:55.937 "strip_size_kb": 0, 00:24:55.937 "state": "online", 00:24:55.937 "raid_level": "raid1", 00:24:55.937 "superblock": true, 00:24:55.937 "num_base_bdevs": 2, 00:24:55.937 "num_base_bdevs_discovered": 2, 00:24:55.937 "num_base_bdevs_operational": 2, 00:24:55.937 "base_bdevs_list": [ 00:24:55.937 { 00:24:55.937 "name": "BaseBdev1", 00:24:55.937 "uuid": "bdb88b67-5558-4bdc-8487-944651e57c86", 00:24:55.937 "is_configured": true, 00:24:55.937 "data_offset": 256, 00:24:55.937 "data_size": 7936 00:24:55.937 }, 00:24:55.937 { 00:24:55.937 "name": "BaseBdev2", 00:24:55.937 "uuid": "b3dec6f0-68ca-4452-b063-b9b0ef021223", 00:24:55.937 "is_configured": true, 00:24:55.937 "data_offset": 256, 00:24:55.937 "data_size": 7936 00:24:55.937 } 00:24:55.937 ] 00:24:55.937 }' 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.937 14:57:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:56.517 14:57:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:56.517 [2024-11-04 14:57:26.191775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.517 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:56.517 "name": "Existed_Raid", 00:24:56.517 "aliases": [ 00:24:56.517 "9c3dc240-e4ee-4475-b075-6432c04d4905" 00:24:56.517 ], 00:24:56.517 "product_name": "Raid Volume", 00:24:56.517 "block_size": 4096, 00:24:56.517 "num_blocks": 7936, 00:24:56.517 "uuid": "9c3dc240-e4ee-4475-b075-6432c04d4905", 00:24:56.517 "assigned_rate_limits": { 00:24:56.517 "rw_ios_per_sec": 0, 00:24:56.517 "rw_mbytes_per_sec": 0, 00:24:56.517 "r_mbytes_per_sec": 0, 00:24:56.517 "w_mbytes_per_sec": 0 00:24:56.517 }, 00:24:56.517 "claimed": false, 00:24:56.517 "zoned": false, 00:24:56.517 "supported_io_types": { 00:24:56.517 "read": true, 
00:24:56.517 "write": true, 00:24:56.517 "unmap": false, 00:24:56.517 "flush": false, 00:24:56.517 "reset": true, 00:24:56.517 "nvme_admin": false, 00:24:56.517 "nvme_io": false, 00:24:56.517 "nvme_io_md": false, 00:24:56.517 "write_zeroes": true, 00:24:56.517 "zcopy": false, 00:24:56.517 "get_zone_info": false, 00:24:56.517 "zone_management": false, 00:24:56.517 "zone_append": false, 00:24:56.517 "compare": false, 00:24:56.517 "compare_and_write": false, 00:24:56.517 "abort": false, 00:24:56.517 "seek_hole": false, 00:24:56.517 "seek_data": false, 00:24:56.517 "copy": false, 00:24:56.517 "nvme_iov_md": false 00:24:56.517 }, 00:24:56.517 "memory_domains": [ 00:24:56.517 { 00:24:56.517 "dma_device_id": "system", 00:24:56.517 "dma_device_type": 1 00:24:56.518 }, 00:24:56.518 { 00:24:56.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.518 "dma_device_type": 2 00:24:56.518 }, 00:24:56.518 { 00:24:56.518 "dma_device_id": "system", 00:24:56.518 "dma_device_type": 1 00:24:56.518 }, 00:24:56.518 { 00:24:56.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.518 "dma_device_type": 2 00:24:56.518 } 00:24:56.518 ], 00:24:56.518 "driver_specific": { 00:24:56.518 "raid": { 00:24:56.518 "uuid": "9c3dc240-e4ee-4475-b075-6432c04d4905", 00:24:56.518 "strip_size_kb": 0, 00:24:56.518 "state": "online", 00:24:56.518 "raid_level": "raid1", 00:24:56.518 "superblock": true, 00:24:56.518 "num_base_bdevs": 2, 00:24:56.518 "num_base_bdevs_discovered": 2, 00:24:56.518 "num_base_bdevs_operational": 2, 00:24:56.518 "base_bdevs_list": [ 00:24:56.518 { 00:24:56.518 "name": "BaseBdev1", 00:24:56.518 "uuid": "bdb88b67-5558-4bdc-8487-944651e57c86", 00:24:56.518 "is_configured": true, 00:24:56.518 "data_offset": 256, 00:24:56.518 "data_size": 7936 00:24:56.518 }, 00:24:56.518 { 00:24:56.518 "name": "BaseBdev2", 00:24:56.518 "uuid": "b3dec6f0-68ca-4452-b063-b9b0ef021223", 00:24:56.518 "is_configured": true, 00:24:56.518 "data_offset": 256, 00:24:56.518 "data_size": 7936 00:24:56.518 } 
00:24:56.518 ] 00:24:56.518 } 00:24:56.518 } 00:24:56.518 }' 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:56.518 BaseBdev2' 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:56.518 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:56.776 [2024-11-04 14:57:26.475557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:56.776 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.777 "name": "Existed_Raid", 00:24:56.777 "uuid": "9c3dc240-e4ee-4475-b075-6432c04d4905", 00:24:56.777 "strip_size_kb": 0, 00:24:56.777 "state": "online", 00:24:56.777 "raid_level": "raid1", 00:24:56.777 "superblock": true, 00:24:56.777 "num_base_bdevs": 2, 00:24:56.777 
"num_base_bdevs_discovered": 1, 00:24:56.777 "num_base_bdevs_operational": 1, 00:24:56.777 "base_bdevs_list": [ 00:24:56.777 { 00:24:56.777 "name": null, 00:24:56.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.777 "is_configured": false, 00:24:56.777 "data_offset": 0, 00:24:56.777 "data_size": 7936 00:24:56.777 }, 00:24:56.777 { 00:24:56.777 "name": "BaseBdev2", 00:24:56.777 "uuid": "b3dec6f0-68ca-4452-b063-b9b0ef021223", 00:24:56.777 "is_configured": true, 00:24:56.777 "data_offset": 256, 00:24:56.777 "data_size": 7936 00:24:56.777 } 00:24:56.777 ] 00:24:56.777 }' 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.777 14:57:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:57.342 14:57:27 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:57.342 [2024-11-04 14:57:27.147128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:57.342 [2024-11-04 14:57:27.147468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:57.342 [2024-11-04 14:57:27.223128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:57.342 [2024-11-04 14:57:27.223194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:57.342 [2024-11-04 14:57:27.223213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.342 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86497 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86497 ']' 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86497 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86497 00:24:57.600 killing process with pid 86497 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86497' 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86497 00:24:57.600 [2024-11-04 14:57:27.312188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:57.600 14:57:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86497 00:24:57.600 [2024-11-04 14:57:27.327505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:58.535 ************************************ 00:24:58.535 END TEST raid_state_function_test_sb_4k 00:24:58.535 ************************************ 00:24:58.535 14:57:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:24:58.535 00:24:58.535 real 0m5.521s 00:24:58.535 user 
0m8.259s 00:24:58.535 sys 0m0.935s 00:24:58.535 14:57:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:58.535 14:57:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:58.535 14:57:28 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:24:58.535 14:57:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:58.535 14:57:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:58.535 14:57:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:58.535 ************************************ 00:24:58.535 START TEST raid_superblock_test_4k 00:24:58.535 ************************************ 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86755 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86755 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86755 ']' 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:58.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:58.535 14:57:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:58.793 [2024-11-04 14:57:28.503342] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:24:58.793 [2024-11-04 14:57:28.503801] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86755 ] 00:24:59.052 [2024-11-04 14:57:28.689558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.052 [2024-11-04 14:57:28.809322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.310 [2024-11-04 14:57:29.019018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:59.310 [2024-11-04 14:57:29.019092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:59.568 malloc1 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.568 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:59.827 [2024-11-04 14:57:29.465017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:59.827 [2024-11-04 14:57:29.465341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.827 [2024-11-04 14:57:29.465388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:59.827 [2024-11-04 14:57:29.465405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.827 [2024-11-04 14:57:29.468597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.827 [2024-11-04 14:57:29.468654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:59.827 pt1 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:59.827 malloc2 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.827 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:59.827 [2024-11-04 14:57:29.520607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:59.827 [2024-11-04 14:57:29.520829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.827 [2024-11-04 14:57:29.520873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:59.828 [2024-11-04 14:57:29.520888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.828 [2024-11-04 14:57:29.523894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.828 [2024-11-04 
14:57:29.523936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:59.828 pt2 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:59.828 [2024-11-04 14:57:29.528800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:59.828 [2024-11-04 14:57:29.531533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:59.828 [2024-11-04 14:57:29.531954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:59.828 [2024-11-04 14:57:29.532090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:59.828 [2024-11-04 14:57:29.532468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:59.828 [2024-11-04 14:57:29.532820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:59.828 [2024-11-04 14:57:29.532952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:59.828 [2024-11-04 14:57:29.533214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:59.828 "name": "raid_bdev1", 00:24:59.828 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:24:59.828 "strip_size_kb": 0, 00:24:59.828 "state": "online", 00:24:59.828 "raid_level": "raid1", 00:24:59.828 "superblock": true, 00:24:59.828 "num_base_bdevs": 2, 00:24:59.828 
"num_base_bdevs_discovered": 2, 00:24:59.828 "num_base_bdevs_operational": 2, 00:24:59.828 "base_bdevs_list": [ 00:24:59.828 { 00:24:59.828 "name": "pt1", 00:24:59.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:59.828 "is_configured": true, 00:24:59.828 "data_offset": 256, 00:24:59.828 "data_size": 7936 00:24:59.828 }, 00:24:59.828 { 00:24:59.828 "name": "pt2", 00:24:59.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:59.828 "is_configured": true, 00:24:59.828 "data_offset": 256, 00:24:59.828 "data_size": 7936 00:24:59.828 } 00:24:59.828 ] 00:24:59.828 }' 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:59.828 14:57:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.395 [2024-11-04 14:57:30.053684] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.395 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.395 "name": "raid_bdev1", 00:25:00.395 "aliases": [ 00:25:00.395 "25100960-0ea7-469e-9bd5-66da864726f7" 00:25:00.395 ], 00:25:00.395 "product_name": "Raid Volume", 00:25:00.395 "block_size": 4096, 00:25:00.395 "num_blocks": 7936, 00:25:00.395 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:25:00.395 "assigned_rate_limits": { 00:25:00.395 "rw_ios_per_sec": 0, 00:25:00.395 "rw_mbytes_per_sec": 0, 00:25:00.395 "r_mbytes_per_sec": 0, 00:25:00.395 "w_mbytes_per_sec": 0 00:25:00.395 }, 00:25:00.395 "claimed": false, 00:25:00.395 "zoned": false, 00:25:00.395 "supported_io_types": { 00:25:00.395 "read": true, 00:25:00.395 "write": true, 00:25:00.395 "unmap": false, 00:25:00.395 "flush": false, 00:25:00.395 "reset": true, 00:25:00.395 "nvme_admin": false, 00:25:00.395 "nvme_io": false, 00:25:00.395 "nvme_io_md": false, 00:25:00.395 "write_zeroes": true, 00:25:00.395 "zcopy": false, 00:25:00.395 "get_zone_info": false, 00:25:00.395 "zone_management": false, 00:25:00.395 "zone_append": false, 00:25:00.395 "compare": false, 00:25:00.395 "compare_and_write": false, 00:25:00.395 "abort": false, 00:25:00.395 "seek_hole": false, 00:25:00.395 "seek_data": false, 00:25:00.395 "copy": false, 00:25:00.395 "nvme_iov_md": false 00:25:00.395 }, 00:25:00.395 "memory_domains": [ 00:25:00.396 { 00:25:00.396 "dma_device_id": "system", 00:25:00.396 "dma_device_type": 1 00:25:00.396 }, 00:25:00.396 { 00:25:00.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.396 "dma_device_type": 2 00:25:00.396 }, 00:25:00.396 { 00:25:00.396 "dma_device_id": "system", 00:25:00.396 "dma_device_type": 1 00:25:00.396 }, 00:25:00.396 { 00:25:00.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.396 "dma_device_type": 2 00:25:00.396 } 00:25:00.396 ], 
00:25:00.396 "driver_specific": { 00:25:00.396 "raid": { 00:25:00.396 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:25:00.396 "strip_size_kb": 0, 00:25:00.396 "state": "online", 00:25:00.396 "raid_level": "raid1", 00:25:00.396 "superblock": true, 00:25:00.396 "num_base_bdevs": 2, 00:25:00.396 "num_base_bdevs_discovered": 2, 00:25:00.396 "num_base_bdevs_operational": 2, 00:25:00.396 "base_bdevs_list": [ 00:25:00.396 { 00:25:00.396 "name": "pt1", 00:25:00.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:00.396 "is_configured": true, 00:25:00.396 "data_offset": 256, 00:25:00.396 "data_size": 7936 00:25:00.396 }, 00:25:00.396 { 00:25:00.396 "name": "pt2", 00:25:00.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:00.396 "is_configured": true, 00:25:00.396 "data_offset": 256, 00:25:00.396 "data_size": 7936 00:25:00.396 } 00:25:00.396 ] 00:25:00.396 } 00:25:00.396 } 00:25:00.396 }' 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:00.396 pt2' 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.396 14:57:30 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:00.396 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 [2024-11-04 14:57:30.313656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=25100960-0ea7-469e-9bd5-66da864726f7 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 25100960-0ea7-469e-9bd5-66da864726f7 ']' 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 [2024-11-04 14:57:30.361331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.655 [2024-11-04 14:57:30.361482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.655 [2024-11-04 14:57:30.361724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.655 [2024-11-04 14:57:30.361902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.655 [2024-11-04 14:57:30.362047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 [2024-11-04 14:57:30.497386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:00.655 [2024-11-04 14:57:30.500126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:00.655 [2024-11-04 14:57:30.500210] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:00.655 [2024-11-04 14:57:30.500340] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:00.655 [2024-11-04 14:57:30.500366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.655 [2024-11-04 14:57:30.500380] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:00.655 request: 00:25:00.655 { 00:25:00.655 "name": "raid_bdev1", 00:25:00.655 "raid_level": "raid1", 00:25:00.655 "base_bdevs": [ 00:25:00.655 "malloc1", 00:25:00.655 "malloc2" 00:25:00.655 ], 00:25:00.655 "superblock": false, 00:25:00.655 "method": "bdev_raid_create", 00:25:00.655 "req_id": 1 00:25:00.655 } 00:25:00.655 Got JSON-RPC error response 00:25:00.655 response: 00:25:00.655 { 00:25:00.655 "code": -17, 00:25:00.655 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:00.655 } 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.914 [2024-11-04 14:57:30.565392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:00.914 [2024-11-04 14:57:30.565626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.914 [2024-11-04 14:57:30.565691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:00.914 [2024-11-04 14:57:30.565807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.914 [2024-11-04 14:57:30.569063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.914 [2024-11-04 14:57:30.569274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:00.914 [2024-11-04 14:57:30.569475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:00.914 [2024-11-04 14:57:30.569672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:00.914 pt1 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:00.914 "name": "raid_bdev1", 00:25:00.914 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:25:00.914 "strip_size_kb": 0, 00:25:00.914 "state": "configuring", 00:25:00.914 "raid_level": "raid1", 00:25:00.914 "superblock": true, 00:25:00.914 "num_base_bdevs": 2, 00:25:00.914 "num_base_bdevs_discovered": 1, 00:25:00.914 "num_base_bdevs_operational": 2, 00:25:00.914 "base_bdevs_list": [ 00:25:00.914 { 00:25:00.914 "name": "pt1", 00:25:00.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:00.914 "is_configured": true, 00:25:00.914 "data_offset": 256, 00:25:00.914 "data_size": 7936 00:25:00.914 }, 00:25:00.914 { 00:25:00.914 "name": null, 00:25:00.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:00.914 "is_configured": false, 00:25:00.914 "data_offset": 256, 00:25:00.914 "data_size": 7936 00:25:00.914 } 
00:25:00.914 ] 00:25:00.914 }' 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:00.914 14:57:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:01.526 [2024-11-04 14:57:31.097786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:01.526 [2024-11-04 14:57:31.098109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.526 [2024-11-04 14:57:31.098179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:01.526 [2024-11-04 14:57:31.098347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.526 [2024-11-04 14:57:31.099031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.526 [2024-11-04 14:57:31.099111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:01.526 [2024-11-04 14:57:31.099211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:01.526 [2024-11-04 14:57:31.099291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:01.526 [2024-11-04 14:57:31.099442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:25:01.526 [2024-11-04 14:57:31.099462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:01.526 [2024-11-04 14:57:31.099780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:01.526 [2024-11-04 14:57:31.099996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:01.526 [2024-11-04 14:57:31.100023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:01.526 [2024-11-04 14:57:31.100192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.526 pt2 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:01.526 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.527 "name": "raid_bdev1", 00:25:01.527 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:25:01.527 "strip_size_kb": 0, 00:25:01.527 "state": "online", 00:25:01.527 "raid_level": "raid1", 00:25:01.527 "superblock": true, 00:25:01.527 "num_base_bdevs": 2, 00:25:01.527 "num_base_bdevs_discovered": 2, 00:25:01.527 "num_base_bdevs_operational": 2, 00:25:01.527 "base_bdevs_list": [ 00:25:01.527 { 00:25:01.527 "name": "pt1", 00:25:01.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:01.527 "is_configured": true, 00:25:01.527 "data_offset": 256, 00:25:01.527 "data_size": 7936 00:25:01.527 }, 00:25:01.527 { 00:25:01.527 "name": "pt2", 00:25:01.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:01.527 "is_configured": true, 00:25:01.527 "data_offset": 256, 00:25:01.527 "data_size": 7936 00:25:01.527 } 00:25:01.527 ] 00:25:01.527 }' 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.527 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:01.785 [2024-11-04 14:57:31.578197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:01.785 "name": "raid_bdev1", 00:25:01.785 "aliases": [ 00:25:01.785 "25100960-0ea7-469e-9bd5-66da864726f7" 00:25:01.785 ], 00:25:01.785 "product_name": "Raid Volume", 00:25:01.785 "block_size": 4096, 00:25:01.785 "num_blocks": 7936, 00:25:01.785 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:25:01.785 "assigned_rate_limits": { 00:25:01.785 "rw_ios_per_sec": 0, 00:25:01.785 "rw_mbytes_per_sec": 0, 00:25:01.785 "r_mbytes_per_sec": 0, 00:25:01.785 "w_mbytes_per_sec": 0 00:25:01.785 }, 00:25:01.785 "claimed": false, 00:25:01.785 "zoned": false, 00:25:01.785 "supported_io_types": { 00:25:01.785 "read": true, 00:25:01.785 "write": true, 00:25:01.785 "unmap": false, 
00:25:01.785 "flush": false, 00:25:01.785 "reset": true, 00:25:01.785 "nvme_admin": false, 00:25:01.785 "nvme_io": false, 00:25:01.785 "nvme_io_md": false, 00:25:01.785 "write_zeroes": true, 00:25:01.785 "zcopy": false, 00:25:01.785 "get_zone_info": false, 00:25:01.785 "zone_management": false, 00:25:01.785 "zone_append": false, 00:25:01.785 "compare": false, 00:25:01.785 "compare_and_write": false, 00:25:01.785 "abort": false, 00:25:01.785 "seek_hole": false, 00:25:01.785 "seek_data": false, 00:25:01.785 "copy": false, 00:25:01.785 "nvme_iov_md": false 00:25:01.785 }, 00:25:01.785 "memory_domains": [ 00:25:01.785 { 00:25:01.785 "dma_device_id": "system", 00:25:01.785 "dma_device_type": 1 00:25:01.785 }, 00:25:01.785 { 00:25:01.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.785 "dma_device_type": 2 00:25:01.785 }, 00:25:01.785 { 00:25:01.785 "dma_device_id": "system", 00:25:01.785 "dma_device_type": 1 00:25:01.785 }, 00:25:01.785 { 00:25:01.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.785 "dma_device_type": 2 00:25:01.785 } 00:25:01.785 ], 00:25:01.785 "driver_specific": { 00:25:01.785 "raid": { 00:25:01.785 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:25:01.785 "strip_size_kb": 0, 00:25:01.785 "state": "online", 00:25:01.785 "raid_level": "raid1", 00:25:01.785 "superblock": true, 00:25:01.785 "num_base_bdevs": 2, 00:25:01.785 "num_base_bdevs_discovered": 2, 00:25:01.785 "num_base_bdevs_operational": 2, 00:25:01.785 "base_bdevs_list": [ 00:25:01.785 { 00:25:01.785 "name": "pt1", 00:25:01.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:01.785 "is_configured": true, 00:25:01.785 "data_offset": 256, 00:25:01.785 "data_size": 7936 00:25:01.785 }, 00:25:01.785 { 00:25:01.785 "name": "pt2", 00:25:01.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:01.785 "is_configured": true, 00:25:01.785 "data_offset": 256, 00:25:01.785 "data_size": 7936 00:25:01.785 } 00:25:01.785 ] 00:25:01.785 } 00:25:01.785 } 00:25:01.785 }' 00:25:01.785 
14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:01.785 pt2' 00:25:01.785 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.043 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.043 
14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.044 [2024-11-04 14:57:31.846357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 25100960-0ea7-469e-9bd5-66da864726f7 '!=' 25100960-0ea7-469e-9bd5-66da864726f7 ']' 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.044 [2024-11-04 14:57:31.898150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:02.044 
14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.044 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.302 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.302 "name": "raid_bdev1", 00:25:02.302 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 
00:25:02.302 "strip_size_kb": 0, 00:25:02.302 "state": "online", 00:25:02.302 "raid_level": "raid1", 00:25:02.302 "superblock": true, 00:25:02.302 "num_base_bdevs": 2, 00:25:02.302 "num_base_bdevs_discovered": 1, 00:25:02.302 "num_base_bdevs_operational": 1, 00:25:02.302 "base_bdevs_list": [ 00:25:02.302 { 00:25:02.302 "name": null, 00:25:02.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.302 "is_configured": false, 00:25:02.302 "data_offset": 0, 00:25:02.302 "data_size": 7936 00:25:02.302 }, 00:25:02.302 { 00:25:02.302 "name": "pt2", 00:25:02.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.302 "is_configured": true, 00:25:02.302 "data_offset": 256, 00:25:02.302 "data_size": 7936 00:25:02.302 } 00:25:02.302 ] 00:25:02.302 }' 00:25:02.302 14:57:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.302 14:57:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.560 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:02.560 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.560 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.560 [2024-11-04 14:57:32.422319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:02.560 [2024-11-04 14:57:32.422509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:02.560 [2024-11-04 14:57:32.422704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:02.560 [2024-11-04 14:57:32.422779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:02.560 [2024-11-04 14:57:32.422799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:02.560 14:57:32 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.560 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:02.560 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.560 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.560 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.560 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:25:02.819 14:57:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.819 [2024-11-04 14:57:32.494222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:02.819 [2024-11-04 14:57:32.494480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.819 [2024-11-04 14:57:32.494548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:02.819 [2024-11-04 14:57:32.494825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.819 [2024-11-04 14:57:32.497956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.819 [2024-11-04 14:57:32.498117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:02.819 [2024-11-04 14:57:32.498400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:02.819 [2024-11-04 14:57:32.498577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:02.819 [2024-11-04 14:57:32.498891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:02.819 pt2 00:25:02.819 [2024-11-04 14:57:32.499007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:02.819 [2024-11-04 14:57:32.499372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:02.819 [2024-11-04 
14:57:32.499581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:02.819 [2024-11-04 14:57:32.499596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:02.819 [2024-11-04 14:57:32.499794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.819 "name": "raid_bdev1", 00:25:02.819 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:25:02.819 "strip_size_kb": 0, 00:25:02.819 "state": "online", 00:25:02.819 "raid_level": "raid1", 00:25:02.819 "superblock": true, 00:25:02.819 "num_base_bdevs": 2, 00:25:02.819 "num_base_bdevs_discovered": 1, 00:25:02.819 "num_base_bdevs_operational": 1, 00:25:02.819 "base_bdevs_list": [ 00:25:02.819 { 00:25:02.819 "name": null, 00:25:02.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.819 "is_configured": false, 00:25:02.819 "data_offset": 256, 00:25:02.819 "data_size": 7936 00:25:02.819 }, 00:25:02.819 { 00:25:02.819 "name": "pt2", 00:25:02.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.819 "is_configured": true, 00:25:02.819 "data_offset": 256, 00:25:02.819 "data_size": 7936 00:25:02.819 } 00:25:02.819 ] 00:25:02.819 }' 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.819 14:57:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 [2024-11-04 14:57:33.006660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:03.387 [2024-11-04 14:57:33.006699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:03.387 [2024-11-04 14:57:33.006805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.387 [2024-11-04 14:57:33.006897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:03.387 [2024-11-04 14:57:33.006912] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 [2024-11-04 14:57:33.066722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:03.387 [2024-11-04 14:57:33.066994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.387 [2024-11-04 14:57:33.067073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:03.387 [2024-11-04 14:57:33.067273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.387 [2024-11-04 14:57:33.070637] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.387 [2024-11-04 14:57:33.070675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:03.387 [2024-11-04 14:57:33.070816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:03.387 [2024-11-04 14:57:33.070896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:03.387 [2024-11-04 14:57:33.071186] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:03.387 [2024-11-04 14:57:33.071211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:03.387 [2024-11-04 14:57:33.071256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:03.387 [2024-11-04 14:57:33.071340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:03.387 pt1 00:25:03.387 [2024-11-04 14:57:33.071458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:03.387 [2024-11-04 14:57:33.071480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:03.387 [2024-11-04 14:57:33.071814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.387 [2024-11-04 14:57:33.072017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:03.387 [2024-11-04 14:57:33.072036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:03.387 [2024-11-04 14:57:33.072225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.387 "name": "raid_bdev1", 00:25:03.387 "uuid": "25100960-0ea7-469e-9bd5-66da864726f7", 00:25:03.387 "strip_size_kb": 0, 00:25:03.387 "state": "online", 00:25:03.387 "raid_level": "raid1", 
00:25:03.387 "superblock": true, 00:25:03.387 "num_base_bdevs": 2, 00:25:03.387 "num_base_bdevs_discovered": 1, 00:25:03.387 "num_base_bdevs_operational": 1, 00:25:03.387 "base_bdevs_list": [ 00:25:03.387 { 00:25:03.387 "name": null, 00:25:03.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.387 "is_configured": false, 00:25:03.387 "data_offset": 256, 00:25:03.387 "data_size": 7936 00:25:03.387 }, 00:25:03.387 { 00:25:03.387 "name": "pt2", 00:25:03.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:03.387 "is_configured": true, 00:25:03.387 "data_offset": 256, 00:25:03.387 "data_size": 7936 00:25:03.387 } 00:25:03.387 ] 00:25:03.387 }' 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.387 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.954 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:03.954 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.954 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.954 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:03.954 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.954 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.955 
[2024-11-04 14:57:33.659447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 25100960-0ea7-469e-9bd5-66da864726f7 '!=' 25100960-0ea7-469e-9bd5-66da864726f7 ']' 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86755 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86755 ']' 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86755 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86755 00:25:03.955 killing process with pid 86755 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86755' 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86755 00:25:03.955 [2024-11-04 14:57:33.739776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:03.955 14:57:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86755 00:25:03.955 [2024-11-04 14:57:33.739899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.955 [2024-11-04 14:57:33.739970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct
00:25:03.955 [2024-11-04 14:57:33.739994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:25:04.213 [2024-11-04 14:57:33.917742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:25:05.149 14:57:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0
00:25:05.149
00:25:05.149 real 0m6.652s
00:25:05.149 user 0m10.369s
00:25:05.149 sys 0m1.075s
00:25:05.149 14:57:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable
00:25:05.149 ************************************
00:25:05.149 END TEST raid_superblock_test_4k
00:25:05.149 ************************************
00:25:05.149 14:57:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:25:05.408 14:57:35 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']'
00:25:05.408 14:57:35 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true
00:25:05.408 14:57:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']'
00:25:05.408 14:57:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:25:05.408 14:57:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:25:05.408 ************************************
00:25:05.408 START TEST raid_rebuild_test_sb_4k
00:25:05.408 ************************************
00:25:05.408 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true
00:25:05.408 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:25:05.408 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:25:05.408 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:25:05.408 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:25:05.408 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:25:05.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87078
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87078
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 87078 ']'
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:05.409 14:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:05.409 [2024-11-04 14:57:35.226858] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization...
00:25:05.409 [2024-11-04 14:57:35.227411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536).
00:25:05.409 Zero copy mechanism will not be used.
00:25:05.409 -allocations --file-prefix=spdk_pid87078 ]
00:25:05.668 [2024-11-04 14:57:35.410274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:05.668 [2024-11-04 14:57:35.547890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.926 [2024-11-04 14:57:35.760845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:05.926 [2024-11-04 14:57:35.761053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.493 BaseBdev1_malloc
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.493 [2024-11-04 14:57:36.256993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:25:06.493 [2024-11-04 14:57:36.257314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:06.493 [2024-11-04 14:57:36.257439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:25:06.493 [2024-11-04 14:57:36.257694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:06.493 [2024-11-04 14:57:36.261963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:06.493 BaseBdev1 [2024-11-04 14:57:36.262206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.493 BaseBdev2_malloc
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.493 [2024-11-04 14:57:36.312495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:25:06.493 [2024-11-04 14:57:36.312795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:06.493 [2024-11-04 14:57:36.312884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:25:06.493 [2024-11-04 14:57:36.313051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:06.493 [2024-11-04 14:57:36.316122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:06.493 [2024-11-04 14:57:36.316183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:25:06.493 BaseBdev2
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.493 spare_malloc
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.493 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.753 spare_delay
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.753 [2024-11-04 14:57:36.387723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:25:06.753 [2024-11-04 14:57:36.387908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:06.753 [2024-11-04 14:57:36.388055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:25:06.753 [2024-11-04 14:57:36.388162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:06.753 [2024-11-04 14:57:36.391324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:06.753 [2024-11-04 14:57:36.391370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:25:06.753 spare
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.753 [2024-11-04 14:57:36.395976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:06.753 [2024-11-04 14:57:36.398748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:06.753 [2024-11-04 14:57:36.399111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:25:06.753 [2024-11-04 14:57:36.399309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:25:06.753 [2024-11-04 14:57:36.399695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:25:06.753 [2024-11-04 14:57:36.399927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:25:06.753 [2024-11-04 14:57:36.399942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:25:06.753 [2024-11-04 14:57:36.400166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:06.753 "name": "raid_bdev1",
00:25:06.753 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55",
00:25:06.753 "strip_size_kb": 0,
00:25:06.753 "state": "online",
00:25:06.753 "raid_level": "raid1",
00:25:06.753 "superblock": true,
00:25:06.753 "num_base_bdevs": 2,
00:25:06.753 "num_base_bdevs_discovered": 2,
00:25:06.753 "num_base_bdevs_operational": 2,
00:25:06.753 "base_bdevs_list": [
00:25:06.753 {
00:25:06.753 "name": "BaseBdev1",
00:25:06.753 "uuid": "5056e27a-b8cb-5557-9cc9-a5cb18670f3e",
00:25:06.753 "is_configured": true,
00:25:06.753 "data_offset": 256,
00:25:06.753 "data_size": 7936
00:25:06.753 },
00:25:06.753 {
00:25:06.753 "name": "BaseBdev2",
00:25:06.753 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51",
00:25:06.753 "is_configured": true,
00:25:06.753 "data_offset": 256,
00:25:06.753 "data_size": 7936
00:25:06.753 }
00:25:06.753 ]
00:25:06.753 }'
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:06.753 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:07.323 [2024-11-04 14:57:36.928902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:07.323 14:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:25:07.323 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:25:07.580 [2024-11-04 14:57:37.316606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 /dev/nbd0
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:25:07.580 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:25:07.581 1+0 records in
00:25:07.581 1+0 records out
00:25:07.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466579 s, 8.8 MB/s
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:25:07.581 14:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:25:08.516 7936+0 records in
00:25:08.516 7936+0 records out
00:25:08.516 32505856 bytes (33 MB, 31 MiB) copied, 0.947468 s, 34.3 MB/s
00:25:08.516 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:25:08.516 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:25:08.516 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:25:08.516 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:25:08.516 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:25:08.516 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:25:08.516 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:25:08.774 [2024-11-04 14:57:38.615901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.774 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:08.775 [2024-11-04 14:57:38.628944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.775 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:09.033 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:09.033 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:09.033 "name": "raid_bdev1",
00:25:09.033 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55",
00:25:09.033 "strip_size_kb": 0,
00:25:09.033 "state": "online",
00:25:09.033 "raid_level": "raid1",
00:25:09.033 "superblock": true,
00:25:09.033 "num_base_bdevs": 2,
00:25:09.033 "num_base_bdevs_discovered": 1,
00:25:09.033 "num_base_bdevs_operational": 1,
00:25:09.033 "base_bdevs_list": [
00:25:09.033 {
00:25:09.033 "name": null,
00:25:09.033 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:09.033 "is_configured": false,
00:25:09.033 "data_offset": 0,
00:25:09.033 "data_size": 7936
00:25:09.033 },
00:25:09.033 {
00:25:09.033 "name": "BaseBdev2",
00:25:09.033 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51",
00:25:09.033 "is_configured": true,
00:25:09.033 "data_offset": 256,
00:25:09.033 "data_size": 7936
00:25:09.033 }
00:25:09.033 ]
00:25:09.033 }'
00:25:09.033 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:09.033 14:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:09.291 14:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:25:09.291 14:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:09.291 14:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:09.291 [2024-11-04 14:57:39.093129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:09.291 [2024-11-04 14:57:39.111053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:25:09.291 14:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:09.291 14:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1
00:25:09.291 [2024-11-04 14:57:39.113715] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:25:10.666 "name": "raid_bdev1",
00:25:10.666 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55",
00:25:10.666 "strip_size_kb": 0,
00:25:10.666 "state": "online",
00:25:10.666 "raid_level": "raid1",
00:25:10.666 "superblock": true,
00:25:10.666 "num_base_bdevs": 2,
00:25:10.666 "num_base_bdevs_discovered": 2,
00:25:10.666 "num_base_bdevs_operational": 2,
00:25:10.666 "process": {
00:25:10.666 "type": "rebuild",
00:25:10.666 "target": "spare",
00:25:10.666 "progress": {
00:25:10.666 "blocks": 2560,
00:25:10.666 "percent": 32
00:25:10.666 }
00:25:10.666 },
00:25:10.666 "base_bdevs_list": [
00:25:10.666 {
00:25:10.666 "name": "spare",
00:25:10.666 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f",
00:25:10.666 "is_configured": true,
00:25:10.666 "data_offset": 256,
00:25:10.666 "data_size": 7936
00:25:10.666 },
00:25:10.666 {
00:25:10.666 "name": "BaseBdev2",
00:25:10.666 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51",
00:25:10.666 "is_configured": true,
00:25:10.666 "data_offset": 256,
00:25:10.666 "data_size": 7936
00:25:10.666 }
00:25:10.666 ]
00:25:10.666 }'
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:10.666 [2024-11-04 14:57:40.287963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:10.666 [2024-11-04 14:57:40.325729] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:25:10.666 [2024-11-04 14:57:40.325828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:10.666 [2024-11-04 14:57:40.325852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:10.666 [2024-11-04 14:57:40.325868] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:10.666 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:10.667 "name": "raid_bdev1",
00:25:10.667 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55",
00:25:10.667 "strip_size_kb": 0,
00:25:10.667 "state": "online",
00:25:10.667 "raid_level": "raid1",
00:25:10.667 "superblock": true,
00:25:10.667 "num_base_bdevs": 2,
00:25:10.667 "num_base_bdevs_discovered": 1,
00:25:10.667 "num_base_bdevs_operational": 1,
00:25:10.667 "base_bdevs_list": [
00:25:10.667 {
00:25:10.667 "name": null,
00:25:10.667 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:10.667 "is_configured": false,
00:25:10.667 "data_offset": 0,
00:25:10.667 "data_size": 7936
00:25:10.667 },
00:25:10.667 {
00:25:10.667 "name": "BaseBdev2",
00:25:10.667 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51",
00:25:10.667 "is_configured": true,
00:25:10.667 "data_offset": 256,
00:25:10.667 "data_size": 7936
00:25:10.667 }
00:25:10.667 ]
00:25:10.667 }'
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:10.667 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:25:11.233 "name": "raid_bdev1",
00:25:11.233 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55",
00:25:11.233 "strip_size_kb": 0,
00:25:11.233 "state": "online",
00:25:11.233 "raid_level": "raid1",
00:25:11.233 "superblock": true,
00:25:11.233 "num_base_bdevs": 2,
00:25:11.233 "num_base_bdevs_discovered": 1,
00:25:11.233 "num_base_bdevs_operational": 1,
00:25:11.233 "base_bdevs_list": [
00:25:11.233 {
00:25:11.233 "name": null,
00:25:11.233 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:11.233 "is_configured": false,
00:25:11.233 "data_offset": 0,
00:25:11.233 "data_size": 7936
00:25:11.233 },
00:25:11.233 {
00:25:11.233 "name": "BaseBdev2",
00:25:11.233 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51",
00:25:11.233 "is_configured": true,
00:25:11.233 "data_offset": 256,
00:25:11.233 "data_size": 7936
00:25:11.233 }
00:25:11.233 ]
00:25:11.233 }'
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:25:11.233 14:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:25:11.233 14:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:25:11.233 14:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:25:11.233 14:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.233 14:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:11.233 [2024-11-04 14:57:41.042681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:11.233 [2024-11-04 14:57:41.061040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:25:11.233 14:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.233 14:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1
00:25:11.233 [2024-11-04 14:57:41.064123] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:12.611 "name": "raid_bdev1", 00:25:12.611 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:12.611 "strip_size_kb": 0, 00:25:12.611 "state": "online", 00:25:12.611 "raid_level": "raid1", 00:25:12.611 "superblock": true, 00:25:12.611 "num_base_bdevs": 2, 00:25:12.611 "num_base_bdevs_discovered": 2, 00:25:12.611 "num_base_bdevs_operational": 2, 00:25:12.611 "process": { 00:25:12.611 "type": "rebuild", 00:25:12.611 "target": "spare", 00:25:12.611 "progress": { 00:25:12.611 "blocks": 2560, 00:25:12.611 "percent": 32 00:25:12.611 } 00:25:12.611 }, 00:25:12.611 "base_bdevs_list": [ 00:25:12.611 { 00:25:12.611 "name": "spare", 00:25:12.611 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:12.611 "is_configured": true, 00:25:12.611 "data_offset": 256, 00:25:12.611 "data_size": 7936 00:25:12.611 }, 00:25:12.611 { 00:25:12.611 "name": "BaseBdev2", 00:25:12.611 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:12.611 "is_configured": true, 00:25:12.611 "data_offset": 256, 00:25:12.611 "data_size": 7936 00:25:12.611 } 00:25:12.611 ] 00:25:12.611 }' 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:12.611 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=744 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:12.611 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:12.612 14:57:42 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:12.612 "name": "raid_bdev1", 00:25:12.612 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:12.612 "strip_size_kb": 0, 00:25:12.612 "state": "online", 00:25:12.612 "raid_level": "raid1", 00:25:12.612 "superblock": true, 00:25:12.612 "num_base_bdevs": 2, 00:25:12.612 "num_base_bdevs_discovered": 2, 00:25:12.612 "num_base_bdevs_operational": 2, 00:25:12.612 "process": { 00:25:12.612 "type": "rebuild", 00:25:12.612 "target": "spare", 00:25:12.612 "progress": { 00:25:12.612 "blocks": 2816, 00:25:12.612 "percent": 35 00:25:12.612 } 00:25:12.612 }, 00:25:12.612 "base_bdevs_list": [ 00:25:12.612 { 00:25:12.612 "name": "spare", 00:25:12.612 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:12.612 "is_configured": true, 00:25:12.612 "data_offset": 256, 00:25:12.612 "data_size": 7936 00:25:12.612 }, 00:25:12.612 { 00:25:12.612 "name": "BaseBdev2", 00:25:12.612 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:12.612 "is_configured": true, 00:25:12.612 "data_offset": 256, 00:25:12.612 "data_size": 7936 00:25:12.612 } 00:25:12.612 ] 00:25:12.612 }' 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:12.612 14:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:13.547 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.806 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:13.806 "name": "raid_bdev1", 00:25:13.806 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:13.806 "strip_size_kb": 0, 00:25:13.806 "state": "online", 00:25:13.806 "raid_level": "raid1", 00:25:13.806 "superblock": true, 00:25:13.806 "num_base_bdevs": 2, 00:25:13.806 "num_base_bdevs_discovered": 2, 00:25:13.806 "num_base_bdevs_operational": 2, 00:25:13.806 "process": { 00:25:13.806 "type": "rebuild", 00:25:13.806 "target": "spare", 00:25:13.806 "progress": { 00:25:13.806 "blocks": 5888, 00:25:13.806 "percent": 74 00:25:13.806 } 00:25:13.806 }, 00:25:13.806 "base_bdevs_list": [ 00:25:13.806 { 00:25:13.806 "name": "spare", 00:25:13.806 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:13.806 "is_configured": true, 00:25:13.806 "data_offset": 256, 00:25:13.806 "data_size": 7936 00:25:13.806 
}, 00:25:13.806 { 00:25:13.806 "name": "BaseBdev2", 00:25:13.806 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:13.806 "is_configured": true, 00:25:13.806 "data_offset": 256, 00:25:13.806 "data_size": 7936 00:25:13.806 } 00:25:13.806 ] 00:25:13.806 }' 00:25:13.806 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:13.806 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.806 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:13.806 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.806 14:57:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:14.372 [2024-11-04 14:57:44.194162] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:14.372 [2024-11-04 14:57:44.194306] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:14.372 [2024-11-04 14:57:44.194483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:14.939 "name": "raid_bdev1", 00:25:14.939 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:14.939 "strip_size_kb": 0, 00:25:14.939 "state": "online", 00:25:14.939 "raid_level": "raid1", 00:25:14.939 "superblock": true, 00:25:14.939 "num_base_bdevs": 2, 00:25:14.939 "num_base_bdevs_discovered": 2, 00:25:14.939 "num_base_bdevs_operational": 2, 00:25:14.939 "base_bdevs_list": [ 00:25:14.939 { 00:25:14.939 "name": "spare", 00:25:14.939 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:14.939 "is_configured": true, 00:25:14.939 "data_offset": 256, 00:25:14.939 "data_size": 7936 00:25:14.939 }, 00:25:14.939 { 00:25:14.939 "name": "BaseBdev2", 00:25:14.939 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:14.939 "is_configured": true, 00:25:14.939 "data_offset": 256, 00:25:14.939 "data_size": 7936 00:25:14.939 } 00:25:14.939 ] 00:25:14.939 }' 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:14.939 "name": "raid_bdev1", 00:25:14.939 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:14.939 "strip_size_kb": 0, 00:25:14.939 "state": "online", 00:25:14.939 "raid_level": "raid1", 00:25:14.939 "superblock": true, 00:25:14.939 "num_base_bdevs": 2, 00:25:14.939 "num_base_bdevs_discovered": 2, 00:25:14.939 "num_base_bdevs_operational": 2, 00:25:14.939 "base_bdevs_list": [ 00:25:14.939 { 00:25:14.939 "name": "spare", 00:25:14.939 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:14.939 "is_configured": true, 00:25:14.939 "data_offset": 256, 00:25:14.939 "data_size": 7936 00:25:14.939 }, 00:25:14.939 { 00:25:14.939 "name": "BaseBdev2", 00:25:14.939 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:14.939 "is_configured": true, 
00:25:14.939 "data_offset": 256, 00:25:14.939 "data_size": 7936 00:25:14.939 } 00:25:14.939 ] 00:25:14.939 }' 00:25:14.939 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:14.940 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:14.940 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.198 14:57:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.198 "name": "raid_bdev1", 00:25:15.198 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:15.198 "strip_size_kb": 0, 00:25:15.198 "state": "online", 00:25:15.198 "raid_level": "raid1", 00:25:15.198 "superblock": true, 00:25:15.198 "num_base_bdevs": 2, 00:25:15.198 "num_base_bdevs_discovered": 2, 00:25:15.198 "num_base_bdevs_operational": 2, 00:25:15.198 "base_bdevs_list": [ 00:25:15.198 { 00:25:15.198 "name": "spare", 00:25:15.198 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:15.198 "is_configured": true, 00:25:15.198 "data_offset": 256, 00:25:15.198 "data_size": 7936 00:25:15.198 }, 00:25:15.198 { 00:25:15.198 "name": "BaseBdev2", 00:25:15.198 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:15.198 "is_configured": true, 00:25:15.198 "data_offset": 256, 00:25:15.198 "data_size": 7936 00:25:15.198 } 00:25:15.198 ] 00:25:15.198 }' 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.198 14:57:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:15.765 [2024-11-04 14:57:45.386620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.765 [2024-11-04 14:57:45.386677] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:25:15.765 [2024-11-04 14:57:45.386793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.765 [2024-11-04 14:57:45.386893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.765 [2024-11-04 14:57:45.386914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:15.765 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:16.024 /dev/nbd0 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:16.024 1+0 records in 00:25:16.024 1+0 records out 00:25:16.024 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023826 s, 17.2 MB/s 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:16.024 14:57:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:16.282 /dev/nbd1 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:25:16.282 14:57:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:16.282 1+0 records in 00:25:16.282 1+0 records out 00:25:16.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369011 s, 11.1 MB/s 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:16.282 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:16.540 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:16.540 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:16.540 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:16.540 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:16.540 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:25:16.540 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:16.540 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:16.799 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:17.059 14:57:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.059 [2024-11-04 14:57:46.904085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:17.059 [2024-11-04 14:57:46.904194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.059 [2024-11-04 14:57:46.904271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:17.059 [2024-11-04 14:57:46.904294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.059 [2024-11-04 14:57:46.907036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.059 [2024-11-04 14:57:46.907102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:17.059 [2024-11-04 14:57:46.907286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:25:17.059 [2024-11-04 14:57:46.907373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:17.059 [2024-11-04 14:57:46.907591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:17.059 spare 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.059 14:57:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.318 [2024-11-04 14:57:47.007730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:17.318 [2024-11-04 14:57:47.007787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:17.318 [2024-11-04 14:57:47.008179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:25:17.318 [2024-11-04 14:57:47.008430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:17.318 [2024-11-04 14:57:47.008461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:17.318 [2024-11-04 14:57:47.008687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:17.318 
14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:17.318 "name": "raid_bdev1", 00:25:17.318 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:17.318 "strip_size_kb": 0, 00:25:17.318 "state": "online", 00:25:17.318 "raid_level": "raid1", 00:25:17.318 "superblock": true, 00:25:17.318 "num_base_bdevs": 2, 00:25:17.318 "num_base_bdevs_discovered": 2, 00:25:17.318 "num_base_bdevs_operational": 2, 00:25:17.318 "base_bdevs_list": [ 00:25:17.318 { 00:25:17.318 "name": "spare", 00:25:17.318 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:17.318 "is_configured": true, 00:25:17.318 "data_offset": 256, 00:25:17.318 
"data_size": 7936 00:25:17.318 }, 00:25:17.318 { 00:25:17.318 "name": "BaseBdev2", 00:25:17.318 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:17.318 "is_configured": true, 00:25:17.318 "data_offset": 256, 00:25:17.318 "data_size": 7936 00:25:17.318 } 00:25:17.318 ] 00:25:17.318 }' 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.318 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:17.885 "name": "raid_bdev1", 00:25:17.885 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:17.885 "strip_size_kb": 0, 00:25:17.885 "state": "online", 00:25:17.885 "raid_level": "raid1", 00:25:17.885 "superblock": true, 00:25:17.885 "num_base_bdevs": 2, 
00:25:17.885 "num_base_bdevs_discovered": 2, 00:25:17.885 "num_base_bdevs_operational": 2, 00:25:17.885 "base_bdevs_list": [ 00:25:17.885 { 00:25:17.885 "name": "spare", 00:25:17.885 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:17.885 "is_configured": true, 00:25:17.885 "data_offset": 256, 00:25:17.885 "data_size": 7936 00:25:17.885 }, 00:25:17.885 { 00:25:17.885 "name": "BaseBdev2", 00:25:17.885 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:17.885 "is_configured": true, 00:25:17.885 "data_offset": 256, 00:25:17.885 "data_size": 7936 00:25:17.885 } 00:25:17.885 ] 00:25:17.885 }' 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.885 14:57:47 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.885 [2024-11-04 14:57:47.752905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:17.885 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.143 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.143 
14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.143 "name": "raid_bdev1", 00:25:18.143 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:18.143 "strip_size_kb": 0, 00:25:18.143 "state": "online", 00:25:18.143 "raid_level": "raid1", 00:25:18.143 "superblock": true, 00:25:18.143 "num_base_bdevs": 2, 00:25:18.143 "num_base_bdevs_discovered": 1, 00:25:18.143 "num_base_bdevs_operational": 1, 00:25:18.143 "base_bdevs_list": [ 00:25:18.143 { 00:25:18.143 "name": null, 00:25:18.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.143 "is_configured": false, 00:25:18.143 "data_offset": 0, 00:25:18.143 "data_size": 7936 00:25:18.143 }, 00:25:18.143 { 00:25:18.143 "name": "BaseBdev2", 00:25:18.143 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:18.143 "is_configured": true, 00:25:18.143 "data_offset": 256, 00:25:18.143 "data_size": 7936 00:25:18.143 } 00:25:18.143 ] 00:25:18.143 }' 00:25:18.143 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.143 14:57:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:18.401 14:57:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:18.401 14:57:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.401 14:57:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:18.401 [2024-11-04 14:57:48.269087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:18.401 [2024-11-04 14:57:48.269421] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:18.401 [2024-11-04 14:57:48.269448] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
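The `verify_raid_bdev_state raid_bdev1 online raid1 0 1` calls traced above compare fields of the `bdev_raid_get_bdevs` JSON against expected values after the `spare` base bdev is removed. A minimal Python sketch of that comparison, using the JSON shape shown in this log (the helper below is illustrative, not SPDK's actual shell implementation, which checks additional fields):

```python
import json

# JSON shape reduced from the bdev_raid_get_bdevs output captured in this log
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "is_configured": false, "data_offset": 0, "data_size": 7936},
    {"name": "BaseBdev2", "is_configured": true, "data_offset": 256, "data_size": 7936}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Mirror the kind of checks the shell helper performs (illustrative only)."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

# Corresponds to the traced call: verify_raid_bdev_state raid_bdev1 online raid1 0 1
print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 1))  # True
```

Note how the removed base bdev stays in `base_bdevs_list` as an unconfigured `null`-named slot with the all-zero UUID, which is why the array still has two entries while `num_base_bdevs_operational` drops to 1.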
00:25:18.401 [2024-11-04 14:57:48.269498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:18.401 [2024-11-04 14:57:48.286267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:25:18.401 14:57:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.401 14:57:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:18.401 [2024-11-04 14:57:48.289071] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:19.782 "name": "raid_bdev1", 00:25:19.782 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:19.782 "strip_size_kb": 0, 00:25:19.782 "state": "online", 
00:25:19.782 "raid_level": "raid1", 00:25:19.782 "superblock": true, 00:25:19.782 "num_base_bdevs": 2, 00:25:19.782 "num_base_bdevs_discovered": 2, 00:25:19.782 "num_base_bdevs_operational": 2, 00:25:19.782 "process": { 00:25:19.782 "type": "rebuild", 00:25:19.782 "target": "spare", 00:25:19.782 "progress": { 00:25:19.782 "blocks": 2304, 00:25:19.782 "percent": 29 00:25:19.782 } 00:25:19.782 }, 00:25:19.782 "base_bdevs_list": [ 00:25:19.782 { 00:25:19.782 "name": "spare", 00:25:19.782 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:19.782 "is_configured": true, 00:25:19.782 "data_offset": 256, 00:25:19.782 "data_size": 7936 00:25:19.782 }, 00:25:19.782 { 00:25:19.782 "name": "BaseBdev2", 00:25:19.782 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:19.782 "is_configured": true, 00:25:19.782 "data_offset": 256, 00:25:19.782 "data_size": 7936 00:25:19.782 } 00:25:19.782 ] 00:25:19.782 }' 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 [2024-11-04 14:57:49.458888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:19.782 [2024-11-04 14:57:49.500869] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:19.782 [2024-11-04 
14:57:49.500976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.782 [2024-11-04 14:57:49.501001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:19.782 [2024-11-04 14:57:49.501017] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.782 "name": "raid_bdev1", 00:25:19.782 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:19.782 "strip_size_kb": 0, 00:25:19.782 "state": "online", 00:25:19.782 "raid_level": "raid1", 00:25:19.782 "superblock": true, 00:25:19.782 "num_base_bdevs": 2, 00:25:19.782 "num_base_bdevs_discovered": 1, 00:25:19.782 "num_base_bdevs_operational": 1, 00:25:19.782 "base_bdevs_list": [ 00:25:19.782 { 00:25:19.782 "name": null, 00:25:19.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.782 "is_configured": false, 00:25:19.782 "data_offset": 0, 00:25:19.782 "data_size": 7936 00:25:19.782 }, 00:25:19.782 { 00:25:19.782 "name": "BaseBdev2", 00:25:19.782 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:19.782 "is_configured": true, 00:25:19.782 "data_offset": 256, 00:25:19.782 "data_size": 7936 00:25:19.782 } 00:25:19.782 ] 00:25:19.782 }' 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.782 14:57:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:20.348 14:57:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:20.348 14:57:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.348 14:57:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:20.348 [2024-11-04 14:57:50.051612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:20.348 [2024-11-04 14:57:50.051740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.348 [2024-11-04 14:57:50.051779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:25:20.348 [2024-11-04 14:57:50.051799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.348 [2024-11-04 14:57:50.052521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.348 [2024-11-04 14:57:50.052564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:20.348 [2024-11-04 14:57:50.052698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:20.348 [2024-11-04 14:57:50.052729] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:20.348 [2024-11-04 14:57:50.052756] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:20.348 [2024-11-04 14:57:50.052793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:20.348 [2024-11-04 14:57:50.069951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:25:20.348 spare 00:25:20.348 14:57:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.348 14:57:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:20.348 [2024-11-04 14:57:50.072790] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:21.283 "name": "raid_bdev1", 00:25:21.283 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:21.283 "strip_size_kb": 0, 00:25:21.283 "state": "online", 00:25:21.283 "raid_level": "raid1", 00:25:21.283 "superblock": true, 00:25:21.283 "num_base_bdevs": 2, 00:25:21.283 "num_base_bdevs_discovered": 2, 00:25:21.283 "num_base_bdevs_operational": 2, 00:25:21.283 "process": { 00:25:21.283 "type": "rebuild", 00:25:21.283 "target": "spare", 00:25:21.283 "progress": { 00:25:21.283 "blocks": 2560, 00:25:21.283 "percent": 32 00:25:21.283 } 00:25:21.283 }, 00:25:21.283 "base_bdevs_list": [ 00:25:21.283 { 00:25:21.283 "name": "spare", 00:25:21.283 "uuid": "7d94b9a0-9e20-55e5-8cd1-26b64b10df9f", 00:25:21.283 "is_configured": true, 00:25:21.283 "data_offset": 256, 00:25:21.283 "data_size": 7936 00:25:21.283 }, 00:25:21.283 { 00:25:21.283 "name": "BaseBdev2", 00:25:21.283 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:21.283 "is_configured": true, 00:25:21.283 "data_offset": 256, 00:25:21.283 "data_size": 7936 00:25:21.283 } 00:25:21.283 ] 00:25:21.283 }' 00:25:21.283 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:21.541 [2024-11-04 14:57:51.238738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:21.541 [2024-11-04 14:57:51.284382] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:21.541 [2024-11-04 14:57:51.284491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.541 [2024-11-04 14:57:51.284537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:21.541 [2024-11-04 14:57:51.284550] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.541 "name": "raid_bdev1", 00:25:21.541 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:21.541 "strip_size_kb": 0, 00:25:21.541 "state": "online", 00:25:21.541 "raid_level": "raid1", 00:25:21.541 "superblock": true, 00:25:21.541 "num_base_bdevs": 2, 00:25:21.541 "num_base_bdevs_discovered": 1, 00:25:21.541 "num_base_bdevs_operational": 1, 00:25:21.541 "base_bdevs_list": [ 00:25:21.541 { 00:25:21.541 "name": null, 00:25:21.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.541 "is_configured": false, 00:25:21.541 "data_offset": 0, 00:25:21.541 "data_size": 7936 00:25:21.541 }, 00:25:21.541 { 00:25:21.541 "name": "BaseBdev2", 00:25:21.541 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:21.541 "is_configured": true, 00:25:21.541 "data_offset": 256, 00:25:21.541 "data_size": 7936 00:25:21.541 } 00:25:21.541 ] 00:25:21.541 }' 
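The `verify_raid_bdev_process` checks in this log read `.process.type // "none"` and `.process.target // "none"` with jq, so a raid bdev with no running rebuild yields `"none"` for both. A hedged Python equivalent of that jq defaulting (the dicts are reduced from this log's output; the function name is illustrative):

```python
def process_fields(info):
    # Equivalent of jq's '.process.type // "none"': fall back to "none"
    # when the "process" object or its keys are absent or null.
    proc = info.get("process") or {}
    return (proc.get("type") or "none", proc.get("target") or "none")

# During a rebuild, as seen in the traced JSON above
rebuilding = {"name": "raid_bdev1",
              "process": {"type": "rebuild", "target": "spare",
                          "progress": {"blocks": 2560, "percent": 32}}}
# After the rebuild target is deleted, the "process" object disappears
idle = {"name": "raid_bdev1"}

print(process_fields(rebuilding))  # ('rebuild', 'spare')
print(process_fields(idle))        # ('none', 'none')
```

This defaulting is what lets the same helper assert both `rebuild`/`spare` while the rebuild is in flight and `none`/`none` once `bdev_passthru_delete spare` has torn the target down.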
00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.541 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.107 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.108 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:22.108 "name": "raid_bdev1", 00:25:22.108 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:22.108 "strip_size_kb": 0, 00:25:22.108 "state": "online", 00:25:22.108 "raid_level": "raid1", 00:25:22.108 "superblock": true, 00:25:22.108 "num_base_bdevs": 2, 00:25:22.108 "num_base_bdevs_discovered": 1, 00:25:22.108 "num_base_bdevs_operational": 1, 00:25:22.108 "base_bdevs_list": [ 00:25:22.108 { 00:25:22.108 "name": null, 00:25:22.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.108 "is_configured": false, 00:25:22.108 "data_offset": 0, 
00:25:22.108 "data_size": 7936 00:25:22.108 }, 00:25:22.108 { 00:25:22.108 "name": "BaseBdev2", 00:25:22.108 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:22.108 "is_configured": true, 00:25:22.108 "data_offset": 256, 00:25:22.108 "data_size": 7936 00:25:22.108 } 00:25:22.108 ] 00:25:22.108 }' 00:25:22.108 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:22.108 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:22.108 14:57:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:22.366 [2024-11-04 14:57:52.019311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:22.366 [2024-11-04 14:57:52.019386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.366 [2024-11-04 14:57:52.019423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:22.366 [2024-11-04 14:57:52.019470] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.366 [2024-11-04 14:57:52.020168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.366 [2024-11-04 14:57:52.020204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:22.366 [2024-11-04 14:57:52.020353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:22.366 [2024-11-04 14:57:52.020378] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:22.366 [2024-11-04 14:57:52.020395] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:22.366 [2024-11-04 14:57:52.020411] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:22.366 BaseBdev1 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.366 14:57:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.300 "name": "raid_bdev1", 00:25:23.300 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55", 00:25:23.300 "strip_size_kb": 0, 00:25:23.300 "state": "online", 00:25:23.300 "raid_level": "raid1", 00:25:23.300 "superblock": true, 00:25:23.300 "num_base_bdevs": 2, 00:25:23.300 "num_base_bdevs_discovered": 1, 00:25:23.300 "num_base_bdevs_operational": 1, 00:25:23.300 "base_bdevs_list": [ 00:25:23.300 { 00:25:23.300 "name": null, 00:25:23.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.300 "is_configured": false, 00:25:23.300 "data_offset": 0, 00:25:23.300 "data_size": 7936 00:25:23.300 }, 00:25:23.300 { 00:25:23.300 "name": "BaseBdev2", 00:25:23.300 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51", 00:25:23.300 "is_configured": true, 00:25:23.300 "data_offset": 256, 00:25:23.300 "data_size": 7936 00:25:23.300 } 00:25:23.300 ] 00:25:23.300 }' 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.300 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:25:23.866 "name": "raid_bdev1",
00:25:23.866 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55",
00:25:23.866 "strip_size_kb": 0,
00:25:23.866 "state": "online",
00:25:23.866 "raid_level": "raid1",
00:25:23.866 "superblock": true,
00:25:23.866 "num_base_bdevs": 2,
00:25:23.866 "num_base_bdevs_discovered": 1,
00:25:23.866 "num_base_bdevs_operational": 1,
00:25:23.866 "base_bdevs_list": [
00:25:23.866 {
00:25:23.866 "name": null,
00:25:23.866 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:23.866 "is_configured": false,
00:25:23.866 "data_offset": 0,
00:25:23.866 "data_size": 7936
00:25:23.866 },
00:25:23.866 {
00:25:23.866 "name": "BaseBdev2",
00:25:23.866 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51",
00:25:23.866 "is_configured": true,
00:25:23.866 "data_offset": 256,
00:25:23.866 "data_size": 7936
00:25:23.866 }
00:25:23.866 ]
00:25:23.866 }'
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:23.866 [2024-11-04 14:57:53.699873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:23.866 [2024-11-04 14:57:53.700140] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:25:23.866 [2024-11-04 14:57:53.700177] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:25:23.866 request:
00:25:23.866 {
00:25:23.866 "base_bdev": "BaseBdev1",
00:25:23.866 "raid_bdev": "raid_bdev1",
00:25:23.866 "method": "bdev_raid_add_base_bdev",
00:25:23.866 "req_id": 1
00:25:23.866 }
00:25:23.866 Got JSON-RPC error response
00:25:23.866 response:
00:25:23.866 {
00:25:23.866 "code": -22,
00:25:23.866 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:25:23.866 }
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:23.866 14:57:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:25.238 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:25.239 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.239 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:25.239 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.239 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:25.239 "name": "raid_bdev1",
00:25:25.239 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55",
00:25:25.239 "strip_size_kb": 0,
00:25:25.239 "state": "online",
00:25:25.239 "raid_level": "raid1",
00:25:25.239 "superblock": true,
00:25:25.239 "num_base_bdevs": 2,
00:25:25.239 "num_base_bdevs_discovered": 1,
00:25:25.239 "num_base_bdevs_operational": 1,
00:25:25.239 "base_bdevs_list": [
00:25:25.239 {
00:25:25.239 "name": null,
00:25:25.239 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:25.239 "is_configured": false,
00:25:25.239 "data_offset": 0,
00:25:25.239 "data_size": 7936
00:25:25.239 },
00:25:25.239 {
00:25:25.239 "name": "BaseBdev2",
00:25:25.239 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51",
00:25:25.239 "is_configured": true,
00:25:25.239 "data_offset": 256,
00:25:25.239 "data_size": 7936
00:25:25.239 }
00:25:25.239 ]
00:25:25.239 }'
00:25:25.239 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:25.239 14:57:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:25:25.497 "name": "raid_bdev1",
00:25:25.497 "uuid": "8da6163f-d8c3-40fe-9fa9-97fb8d7aed55",
00:25:25.497 "strip_size_kb": 0,
00:25:25.497 "state": "online",
00:25:25.497 "raid_level": "raid1",
00:25:25.497 "superblock": true,
00:25:25.497 "num_base_bdevs": 2,
00:25:25.497 "num_base_bdevs_discovered": 1,
00:25:25.497 "num_base_bdevs_operational": 1,
00:25:25.497 "base_bdevs_list": [
00:25:25.497 {
00:25:25.497 "name": null,
00:25:25.497 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:25.497 "is_configured": false,
00:25:25.497 "data_offset": 0,
00:25:25.497 "data_size": 7936
00:25:25.497 },
00:25:25.497 {
00:25:25.497 "name": "BaseBdev2",
00:25:25.497 "uuid": "5f5ff6de-b778-51cc-b835-76e0267dfb51",
00:25:25.497 "is_configured": true,
00:25:25.497 "data_offset": 256,
00:25:25.497 "data_size": 7936
00:25:25.497 }
00:25:25.497 ]
00:25:25.497 }'
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:25:25.497 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87078
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 87078 ']'
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 87078
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87078
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:25.755 killing process with pid 87078
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87078'
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 87078
00:25:25.755 Received shutdown signal, test time was about 60.000000 seconds
00:25:25.755
00:25:25.755 Latency(us)
00:25:25.755 [2024-11-04T14:57:55.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:25.755 [2024-11-04T14:57:55.647Z] ===================================================================================================================
00:25:25.755 [2024-11-04T14:57:55.647Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:25:25.755 [2024-11-04 14:57:55.429611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:25:25.755 14:57:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 87078
00:25:25.755 [2024-11-04 14:57:55.429798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:25.755 [2024-11-04 14:57:55.429885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:25.755 [2024-11-04 14:57:55.429914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:25:26.013 [2024-11-04 14:57:55.723011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:25:27.394 14:57:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0
00:25:27.394
00:25:27.394 real 0m21.772s
00:25:27.394 user 0m29.249s
00:25:27.394 sys 0m2.703s
00:25:27.394 14:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable
00:25:27.394 ************************************
00:25:27.394 END TEST raid_rebuild_test_sb_4k
00:25:27.394 ************************************
00:25:27.394 14:57:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:27.394 14:57:56 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32'
00:25:27.394 14:57:56 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true
00:25:27.394 14:57:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:25:27.394 14:57:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:25:27.394 14:57:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:25:27.394 ************************************
00:25:27.394 START TEST raid_state_function_test_sb_md_separate
00:25:27.394 ************************************
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87781
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87781'
00:25:27.394 Process raid pid: 87781
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87781
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87781 ']'
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:27.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:27.394 14:57:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:27.394 [2024-11-04 14:57:57.058950] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization...
00:25:27.394 [2024-11-04 14:57:57.059140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:27.394 [2024-11-04 14:57:57.245845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:27.706 [2024-11-04 14:57:57.393221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:27.964 [2024-11-04 14:57:57.616433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:27.964 [2024-11-04 14:57:57.616535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:28.222 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:28.222 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0
00:25:28.222 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:25:28.222 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:28.222 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:28.222 [2024-11-04 14:57:58.111392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:28.222 [2024-11-04 14:57:58.111480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:28.222 [2024-11-04 14:57:58.111497] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:28.222 [2024-11-04 14:57:58.111514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:28.479 "name": "Existed_Raid",
00:25:28.479 "uuid": "5a5d0132-6da6-46a5-9ab5-b9e5d165e95e",
00:25:28.479 "strip_size_kb": 0,
00:25:28.479 "state": "configuring",
00:25:28.479 "raid_level": "raid1",
00:25:28.479 "superblock": true,
00:25:28.479 "num_base_bdevs": 2,
00:25:28.479 "num_base_bdevs_discovered": 0,
00:25:28.479 "num_base_bdevs_operational": 2,
00:25:28.479 "base_bdevs_list": [
00:25:28.479 {
00:25:28.479 "name": "BaseBdev1",
00:25:28.479 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:28.479 "is_configured": false,
00:25:28.479 "data_offset": 0,
00:25:28.479 "data_size": 0
00:25:28.479 },
00:25:28.479 {
00:25:28.479 "name": "BaseBdev2",
00:25:28.479 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:28.479 "is_configured": false,
00:25:28.479 "data_offset": 0,
00:25:28.479 "data_size": 0
00:25:28.479 }
00:25:28.479 ]
00:25:28.479 }'
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:28.479 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.043 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:25:29.043 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.043 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.043 [2024-11-04 14:57:58.651567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:25:29.043 [2024-11-04 14:57:58.651633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:25:29.043 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.043 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:25:29.043 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.043 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.043 [2024-11-04 14:57:58.659527] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:29.043 [2024-11-04 14:57:58.659590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:29.043 [2024-11-04 14:57:58.659615] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:29.043 [2024-11-04 14:57:58.659643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.044 [2024-11-04 14:57:58.711050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:29.044 BaseBdev1
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.044 [
00:25:29.044 {
00:25:29.044 "name": "BaseBdev1",
00:25:29.044 "aliases": [
00:25:29.044 "8f48ee32-238c-4d94-ae14-81a7b392196f"
00:25:29.044 ],
00:25:29.044 "product_name": "Malloc disk",
00:25:29.044 "block_size": 4096,
00:25:29.044 "num_blocks": 8192,
00:25:29.044 "uuid": "8f48ee32-238c-4d94-ae14-81a7b392196f",
00:25:29.044 "md_size": 32,
00:25:29.044 "md_interleave": false,
00:25:29.044 "dif_type": 0,
00:25:29.044 "assigned_rate_limits": {
00:25:29.044 "rw_ios_per_sec": 0,
00:25:29.044 "rw_mbytes_per_sec": 0,
00:25:29.044 "r_mbytes_per_sec": 0,
00:25:29.044 "w_mbytes_per_sec": 0
00:25:29.044 },
00:25:29.044 "claimed": true,
00:25:29.044 "claim_type": "exclusive_write",
00:25:29.044 "zoned": false,
00:25:29.044 "supported_io_types": {
00:25:29.044 "read": true,
00:25:29.044 "write": true,
00:25:29.044 "unmap": true,
00:25:29.044 "flush": true,
00:25:29.044 "reset": true,
00:25:29.044 "nvme_admin": false,
00:25:29.044 "nvme_io": false,
00:25:29.044 "nvme_io_md": false,
00:25:29.044 "write_zeroes": true,
00:25:29.044 "zcopy": true,
00:25:29.044 "get_zone_info": false,
00:25:29.044 "zone_management": false,
00:25:29.044 "zone_append": false,
00:25:29.044 "compare": false,
00:25:29.044 "compare_and_write": false,
00:25:29.044 "abort": true,
00:25:29.044 "seek_hole": false,
00:25:29.044 "seek_data": false,
00:25:29.044 "copy": true,
00:25:29.044 "nvme_iov_md": false
00:25:29.044 },
00:25:29.044 "memory_domains": [
00:25:29.044 {
00:25:29.044 "dma_device_id": "system",
00:25:29.044 "dma_device_type": 1
00:25:29.044 },
00:25:29.044 {
00:25:29.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:29.044 "dma_device_type": 2
00:25:29.044 }
00:25:29.044 ],
00:25:29.044 "driver_specific": {}
00:25:29.044 }
00:25:29.044 ]
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:29.044 "name": "Existed_Raid",
00:25:29.044 "uuid": "bc837f89-dff5-454a-aa2d-8a51e1b15c72",
00:25:29.044 "strip_size_kb": 0,
00:25:29.044 "state": "configuring",
00:25:29.044 "raid_level": "raid1",
00:25:29.044 "superblock": true,
00:25:29.044 "num_base_bdevs": 2,
00:25:29.044 "num_base_bdevs_discovered": 1,
00:25:29.044 "num_base_bdevs_operational": 2,
00:25:29.044 "base_bdevs_list": [
00:25:29.044 {
00:25:29.044 "name": "BaseBdev1",
00:25:29.044 "uuid": "8f48ee32-238c-4d94-ae14-81a7b392196f",
00:25:29.044 "is_configured": true,
00:25:29.044 "data_offset": 256,
00:25:29.044 "data_size": 7936
00:25:29.044 },
00:25:29.044 {
00:25:29.044 "name": "BaseBdev2",
00:25:29.044 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:29.044 "is_configured": false,
00:25:29.044 "data_offset": 0,
00:25:29.044 "data_size": 0
00:25:29.044 }
00:25:29.044 ]
00:25:29.044 }'
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:29.044 14:57:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.609 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:25:29.609 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.609 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.609 [2024-11-04 14:57:59.259373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:25:29.609 [2024-11-04 14:57:59.259454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:25:29.609 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.609 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:25:29.609 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.609 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.609 [2024-11-04 14:57:59.271386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:29.609 [2024-11-04 14:57:59.274156] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:29.609 [2024-11-04 14:57:59.274340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:29.610 "name": "Existed_Raid",
00:25:29.610 "uuid": "c48a88c7-7869-4c8e-8d2c-2836162c905a",
00:25:29.610 "strip_size_kb": 0,
00:25:29.610 "state": "configuring",
00:25:29.610 "raid_level": "raid1",
00:25:29.610 "superblock": true,
00:25:29.610 "num_base_bdevs": 2,
00:25:29.610 "num_base_bdevs_discovered": 1,
"num_base_bdevs_operational": 2, 00:25:29.610 "base_bdevs_list": [ 00:25:29.610 { 00:25:29.610 "name": "BaseBdev1", 00:25:29.610 "uuid": "8f48ee32-238c-4d94-ae14-81a7b392196f", 00:25:29.610 "is_configured": true, 00:25:29.610 "data_offset": 256, 00:25:29.610 "data_size": 7936 00:25:29.610 }, 00:25:29.610 { 00:25:29.610 "name": "BaseBdev2", 00:25:29.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.610 "is_configured": false, 00:25:29.610 "data_offset": 0, 00:25:29.610 "data_size": 0 00:25:29.610 } 00:25:29.610 ] 00:25:29.610 }' 00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.610 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.174 [2024-11-04 14:57:59.824133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:30.174 [2024-11-04 14:57:59.824537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:30.174 [2024-11-04 14:57:59.824558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:30.174 [2024-11-04 14:57:59.824665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:30.174 BaseBdev2 00:25:30.174 [2024-11-04 14:57:59.824843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:30.174 [2024-11-04 14:57:59.824879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:25:30.174 [2024-11-04 14:57:59.825002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.174 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.174 [ 00:25:30.174 { 00:25:30.174 "name": "BaseBdev2", 00:25:30.174 "aliases": [ 00:25:30.174 
"84c73e96-f45d-4208-9323-885bbec77655" 00:25:30.174 ], 00:25:30.174 "product_name": "Malloc disk", 00:25:30.174 "block_size": 4096, 00:25:30.174 "num_blocks": 8192, 00:25:30.174 "uuid": "84c73e96-f45d-4208-9323-885bbec77655", 00:25:30.174 "md_size": 32, 00:25:30.174 "md_interleave": false, 00:25:30.174 "dif_type": 0, 00:25:30.174 "assigned_rate_limits": { 00:25:30.174 "rw_ios_per_sec": 0, 00:25:30.174 "rw_mbytes_per_sec": 0, 00:25:30.174 "r_mbytes_per_sec": 0, 00:25:30.174 "w_mbytes_per_sec": 0 00:25:30.174 }, 00:25:30.174 "claimed": true, 00:25:30.174 "claim_type": "exclusive_write", 00:25:30.175 "zoned": false, 00:25:30.175 "supported_io_types": { 00:25:30.175 "read": true, 00:25:30.175 "write": true, 00:25:30.175 "unmap": true, 00:25:30.175 "flush": true, 00:25:30.175 "reset": true, 00:25:30.175 "nvme_admin": false, 00:25:30.175 "nvme_io": false, 00:25:30.175 "nvme_io_md": false, 00:25:30.175 "write_zeroes": true, 00:25:30.175 "zcopy": true, 00:25:30.175 "get_zone_info": false, 00:25:30.175 "zone_management": false, 00:25:30.175 "zone_append": false, 00:25:30.175 "compare": false, 00:25:30.175 "compare_and_write": false, 00:25:30.175 "abort": true, 00:25:30.175 "seek_hole": false, 00:25:30.175 "seek_data": false, 00:25:30.175 "copy": true, 00:25:30.175 "nvme_iov_md": false 00:25:30.175 }, 00:25:30.175 "memory_domains": [ 00:25:30.175 { 00:25:30.175 "dma_device_id": "system", 00:25:30.175 "dma_device_type": 1 00:25:30.175 }, 00:25:30.175 { 00:25:30.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.175 "dma_device_type": 2 00:25:30.175 } 00:25:30.175 ], 00:25:30.175 "driver_specific": {} 00:25:30.175 } 00:25:30.175 ] 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.175 14:57:59 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.175 "name": "Existed_Raid", 00:25:30.175 "uuid": "c48a88c7-7869-4c8e-8d2c-2836162c905a", 00:25:30.175 "strip_size_kb": 0, 00:25:30.175 "state": "online", 00:25:30.175 "raid_level": "raid1", 00:25:30.175 "superblock": true, 00:25:30.175 "num_base_bdevs": 2, 00:25:30.175 "num_base_bdevs_discovered": 2, 00:25:30.175 "num_base_bdevs_operational": 2, 00:25:30.175 "base_bdevs_list": [ 00:25:30.175 { 00:25:30.175 "name": "BaseBdev1", 00:25:30.175 "uuid": "8f48ee32-238c-4d94-ae14-81a7b392196f", 00:25:30.175 "is_configured": true, 00:25:30.175 "data_offset": 256, 00:25:30.175 "data_size": 7936 00:25:30.175 }, 00:25:30.175 { 00:25:30.175 "name": "BaseBdev2", 00:25:30.175 "uuid": "84c73e96-f45d-4208-9323-885bbec77655", 00:25:30.175 "is_configured": true, 00:25:30.175 "data_offset": 256, 00:25:30.175 "data_size": 7936 00:25:30.175 } 00:25:30.175 ] 00:25:30.175 }' 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.175 14:57:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:25:30.741 14:58:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.741 [2024-11-04 14:58:00.352926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.741 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:30.741 "name": "Existed_Raid", 00:25:30.741 "aliases": [ 00:25:30.741 "c48a88c7-7869-4c8e-8d2c-2836162c905a" 00:25:30.741 ], 00:25:30.741 "product_name": "Raid Volume", 00:25:30.741 "block_size": 4096, 00:25:30.741 "num_blocks": 7936, 00:25:30.741 "uuid": "c48a88c7-7869-4c8e-8d2c-2836162c905a", 00:25:30.741 "md_size": 32, 00:25:30.741 "md_interleave": false, 00:25:30.741 "dif_type": 0, 00:25:30.741 "assigned_rate_limits": { 00:25:30.741 "rw_ios_per_sec": 0, 00:25:30.741 "rw_mbytes_per_sec": 0, 00:25:30.741 "r_mbytes_per_sec": 0, 00:25:30.741 "w_mbytes_per_sec": 0 00:25:30.741 }, 00:25:30.741 "claimed": false, 00:25:30.741 "zoned": false, 00:25:30.741 "supported_io_types": { 00:25:30.741 "read": true, 00:25:30.741 "write": true, 00:25:30.741 "unmap": false, 00:25:30.741 "flush": false, 00:25:30.741 "reset": true, 00:25:30.741 "nvme_admin": false, 00:25:30.741 "nvme_io": false, 00:25:30.741 "nvme_io_md": false, 00:25:30.741 "write_zeroes": true, 00:25:30.741 "zcopy": false, 00:25:30.741 "get_zone_info": 
false, 00:25:30.741 "zone_management": false, 00:25:30.741 "zone_append": false, 00:25:30.741 "compare": false, 00:25:30.741 "compare_and_write": false, 00:25:30.741 "abort": false, 00:25:30.741 "seek_hole": false, 00:25:30.741 "seek_data": false, 00:25:30.741 "copy": false, 00:25:30.741 "nvme_iov_md": false 00:25:30.741 }, 00:25:30.741 "memory_domains": [ 00:25:30.741 { 00:25:30.741 "dma_device_id": "system", 00:25:30.741 "dma_device_type": 1 00:25:30.741 }, 00:25:30.741 { 00:25:30.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.741 "dma_device_type": 2 00:25:30.741 }, 00:25:30.741 { 00:25:30.741 "dma_device_id": "system", 00:25:30.741 "dma_device_type": 1 00:25:30.741 }, 00:25:30.741 { 00:25:30.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.741 "dma_device_type": 2 00:25:30.741 } 00:25:30.741 ], 00:25:30.741 "driver_specific": { 00:25:30.741 "raid": { 00:25:30.741 "uuid": "c48a88c7-7869-4c8e-8d2c-2836162c905a", 00:25:30.741 "strip_size_kb": 0, 00:25:30.741 "state": "online", 00:25:30.741 "raid_level": "raid1", 00:25:30.741 "superblock": true, 00:25:30.741 "num_base_bdevs": 2, 00:25:30.741 "num_base_bdevs_discovered": 2, 00:25:30.741 "num_base_bdevs_operational": 2, 00:25:30.741 "base_bdevs_list": [ 00:25:30.741 { 00:25:30.741 "name": "BaseBdev1", 00:25:30.741 "uuid": "8f48ee32-238c-4d94-ae14-81a7b392196f", 00:25:30.741 "is_configured": true, 00:25:30.742 "data_offset": 256, 00:25:30.742 "data_size": 7936 00:25:30.742 }, 00:25:30.742 { 00:25:30.742 "name": "BaseBdev2", 00:25:30.742 "uuid": "84c73e96-f45d-4208-9323-885bbec77655", 00:25:30.742 "is_configured": true, 00:25:30.742 "data_offset": 256, 00:25:30.742 "data_size": 7936 00:25:30.742 } 00:25:30.742 ] 00:25:30.742 } 00:25:30.742 } 00:25:30.742 }' 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:30.742 14:58:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:30.742 BaseBdev2' 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.742 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.742 [2024-11-04 14:58:00.624651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.000 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.000 "name": "Existed_Raid", 00:25:31.000 "uuid": 
"c48a88c7-7869-4c8e-8d2c-2836162c905a", 00:25:31.000 "strip_size_kb": 0, 00:25:31.000 "state": "online", 00:25:31.000 "raid_level": "raid1", 00:25:31.000 "superblock": true, 00:25:31.000 "num_base_bdevs": 2, 00:25:31.000 "num_base_bdevs_discovered": 1, 00:25:31.000 "num_base_bdevs_operational": 1, 00:25:31.000 "base_bdevs_list": [ 00:25:31.000 { 00:25:31.000 "name": null, 00:25:31.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.000 "is_configured": false, 00:25:31.000 "data_offset": 0, 00:25:31.000 "data_size": 7936 00:25:31.000 }, 00:25:31.000 { 00:25:31.000 "name": "BaseBdev2", 00:25:31.000 "uuid": "84c73e96-f45d-4208-9323-885bbec77655", 00:25:31.000 "is_configured": true, 00:25:31.000 "data_offset": 256, 00:25:31.001 "data_size": 7936 00:25:31.001 } 00:25:31.001 ] 00:25:31.001 }' 00:25:31.001 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.001 14:58:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:31.567 [2024-11-04 14:58:01.313804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:31.567 [2024-11-04 14:58:01.313976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:31.567 [2024-11-04 14:58:01.413093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:31.567 [2024-11-04 14:58:01.413171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:31.567 [2024-11-04 14:58:01.413192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:31.567 14:58:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:31.567 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87781 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87781 ']' 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87781 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87781 00:25:31.826 killing process with pid 87781 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87781' 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87781 00:25:31.826 [2024-11-04 14:58:01.510682] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:31.826 14:58:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87781 00:25:31.826 [2024-11-04 14:58:01.527105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:33.202 14:58:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:25:33.202 00:25:33.202 real 0m5.722s 00:25:33.202 user 0m8.512s 00:25:33.202 sys 0m0.901s 00:25:33.202 14:58:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:33.202 ************************************ 00:25:33.202 14:58:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:33.202 END TEST raid_state_function_test_sb_md_separate 00:25:33.202 ************************************ 00:25:33.202 14:58:02 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:25:33.203 14:58:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:33.203 14:58:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:33.203 14:58:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:33.203 ************************************ 00:25:33.203 START TEST raid_superblock_test_md_separate 00:25:33.203 ************************************ 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88039 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88039 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88039 ']' 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.203 14:58:02 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:33.203 14:58:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:33.203 [2024-11-04 14:58:02.835963] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:25:33.203 [2024-11-04 14:58:02.836414] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88039 ] 00:25:33.203 [2024-11-04 14:58:03.022807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.461 [2024-11-04 14:58:03.177483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.719 [2024-11-04 14:58:03.414277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:33.719 [2024-11-04 14:58:03.414607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:33.978 14:58:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.978 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.237 malloc1 00:25:34.237 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.237 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:34.237 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.237 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.237 [2024-11-04 14:58:03.879568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:34.237 [2024-11-04 14:58:03.879980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.238 [2024-11-04 14:58:03.880027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:25:34.238 [2024-11-04 14:58:03.880044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.238 [2024-11-04 14:58:03.882991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.238 [2024-11-04 14:58:03.883210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:34.238 pt1 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.238 malloc2 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.238 [2024-11-04 14:58:03.941326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:34.238 [2024-11-04 14:58:03.941656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.238 [2024-11-04 14:58:03.941701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:34.238 [2024-11-04 14:58:03.941717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.238 [2024-11-04 14:58:03.944639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.238 [2024-11-04 14:58:03.944818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:34.238 pt2 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.238 [2024-11-04 14:58:03.953419] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:34.238 [2024-11-04 14:58:03.956291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:34.238 [2024-11-04 14:58:03.956537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:34.238 [2024-11-04 14:58:03.956581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:34.238 [2024-11-04 14:58:03.956688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:34.238 [2024-11-04 14:58:03.956859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:34.238 [2024-11-04 14:58:03.956884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:34.238 [2024-11-04 14:58:03.957006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:34.238 14:58:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.238 14:58:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.238 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:34.238 "name": "raid_bdev1", 00:25:34.238 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:34.238 "strip_size_kb": 0, 00:25:34.238 "state": "online", 00:25:34.238 "raid_level": "raid1", 00:25:34.238 "superblock": true, 00:25:34.238 "num_base_bdevs": 2, 00:25:34.238 "num_base_bdevs_discovered": 2, 00:25:34.238 "num_base_bdevs_operational": 2, 00:25:34.238 "base_bdevs_list": [ 00:25:34.238 { 00:25:34.238 "name": "pt1", 00:25:34.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:34.238 "is_configured": true, 00:25:34.238 "data_offset": 256, 00:25:34.238 "data_size": 7936 00:25:34.238 }, 00:25:34.238 { 00:25:34.238 "name": "pt2", 00:25:34.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:34.238 "is_configured": true, 00:25:34.238 "data_offset": 256, 00:25:34.238 "data_size": 7936 00:25:34.238 } 00:25:34.238 ] 00:25:34.238 }' 00:25:34.238 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:25:34.238 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:34.856 [2024-11-04 14:58:04.482050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.856 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:34.856 "name": "raid_bdev1", 00:25:34.856 "aliases": [ 00:25:34.856 "f2fd2706-bf1a-430d-8329-788418edcec6" 00:25:34.856 ], 00:25:34.856 "product_name": "Raid Volume", 00:25:34.856 "block_size": 4096, 00:25:34.856 "num_blocks": 7936, 00:25:34.856 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:34.856 "md_size": 32, 
00:25:34.856 "md_interleave": false, 00:25:34.856 "dif_type": 0, 00:25:34.856 "assigned_rate_limits": { 00:25:34.856 "rw_ios_per_sec": 0, 00:25:34.856 "rw_mbytes_per_sec": 0, 00:25:34.856 "r_mbytes_per_sec": 0, 00:25:34.856 "w_mbytes_per_sec": 0 00:25:34.856 }, 00:25:34.856 "claimed": false, 00:25:34.856 "zoned": false, 00:25:34.856 "supported_io_types": { 00:25:34.856 "read": true, 00:25:34.856 "write": true, 00:25:34.856 "unmap": false, 00:25:34.856 "flush": false, 00:25:34.856 "reset": true, 00:25:34.856 "nvme_admin": false, 00:25:34.856 "nvme_io": false, 00:25:34.856 "nvme_io_md": false, 00:25:34.856 "write_zeroes": true, 00:25:34.856 "zcopy": false, 00:25:34.856 "get_zone_info": false, 00:25:34.856 "zone_management": false, 00:25:34.856 "zone_append": false, 00:25:34.856 "compare": false, 00:25:34.856 "compare_and_write": false, 00:25:34.856 "abort": false, 00:25:34.856 "seek_hole": false, 00:25:34.856 "seek_data": false, 00:25:34.856 "copy": false, 00:25:34.856 "nvme_iov_md": false 00:25:34.856 }, 00:25:34.856 "memory_domains": [ 00:25:34.856 { 00:25:34.856 "dma_device_id": "system", 00:25:34.856 "dma_device_type": 1 00:25:34.856 }, 00:25:34.856 { 00:25:34.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.856 "dma_device_type": 2 00:25:34.856 }, 00:25:34.856 { 00:25:34.856 "dma_device_id": "system", 00:25:34.856 "dma_device_type": 1 00:25:34.856 }, 00:25:34.856 { 00:25:34.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.856 "dma_device_type": 2 00:25:34.856 } 00:25:34.856 ], 00:25:34.857 "driver_specific": { 00:25:34.857 "raid": { 00:25:34.857 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:34.857 "strip_size_kb": 0, 00:25:34.857 "state": "online", 00:25:34.857 "raid_level": "raid1", 00:25:34.857 "superblock": true, 00:25:34.857 "num_base_bdevs": 2, 00:25:34.857 "num_base_bdevs_discovered": 2, 00:25:34.857 "num_base_bdevs_operational": 2, 00:25:34.857 "base_bdevs_list": [ 00:25:34.857 { 00:25:34.857 "name": "pt1", 00:25:34.857 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:34.857 "is_configured": true, 00:25:34.857 "data_offset": 256, 00:25:34.857 "data_size": 7936 00:25:34.857 }, 00:25:34.857 { 00:25:34.857 "name": "pt2", 00:25:34.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:34.857 "is_configured": true, 00:25:34.857 "data_offset": 256, 00:25:34.857 "data_size": 7936 00:25:34.857 } 00:25:34.857 ] 00:25:34.857 } 00:25:34.857 } 00:25:34.857 }' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:34.857 pt2' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.857 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 [2024-11-04 14:58:04.749972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f2fd2706-bf1a-430d-8329-788418edcec6 00:25:35.116 
14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z f2fd2706-bf1a-430d-8329-788418edcec6 ']' 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 [2024-11-04 14:58:04.797645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.116 [2024-11-04 14:58:04.797682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.116 [2024-11-04 14:58:04.797805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.116 [2024-11-04 14:58:04.797916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.116 [2024-11-04 14:58:04.797936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:35.116 14:58:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 [2024-11-04 14:58:04.937668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:35.116 [2024-11-04 14:58:04.940307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:35.116 [2024-11-04 14:58:04.940411] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:35.116 [2024-11-04 14:58:04.940489] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:25:35.116 [2024-11-04 14:58:04.940515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.116 [2024-11-04 14:58:04.940529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:35.116 request: 00:25:35.116 { 00:25:35.116 "name": "raid_bdev1", 00:25:35.116 "raid_level": "raid1", 00:25:35.116 "base_bdevs": [ 00:25:35.116 "malloc1", 00:25:35.116 "malloc2" 00:25:35.116 ], 00:25:35.116 "superblock": false, 00:25:35.116 "method": "bdev_raid_create", 00:25:35.116 "req_id": 1 00:25:35.116 } 00:25:35.116 Got JSON-RPC error response 00:25:35.116 response: 00:25:35.116 { 00:25:35.116 "code": -17, 00:25:35.116 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:35.116 } 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:35.116 14:58:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.116 14:58:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:35.116 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:35.116 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:35.116 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.116 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.375 [2024-11-04 14:58:05.009657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:35.375 [2024-11-04 14:58:05.009853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.375 [2024-11-04 14:58:05.009921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:35.375 [2024-11-04 14:58:05.010168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.375 [2024-11-04 14:58:05.013111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.375 [2024-11-04 14:58:05.013315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:35.375 [2024-11-04 14:58:05.013387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:35.375 [2024-11-04 14:58:05.013476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:35.375 pt1 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:35.375 
14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.375 "name": "raid_bdev1", 00:25:35.375 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:35.375 "strip_size_kb": 0, 00:25:35.375 "state": "configuring", 00:25:35.375 "raid_level": "raid1", 00:25:35.375 "superblock": true, 00:25:35.375 "num_base_bdevs": 2, 00:25:35.375 "num_base_bdevs_discovered": 1, 00:25:35.375 
"num_base_bdevs_operational": 2, 00:25:35.375 "base_bdevs_list": [ 00:25:35.375 { 00:25:35.375 "name": "pt1", 00:25:35.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.375 "is_configured": true, 00:25:35.375 "data_offset": 256, 00:25:35.375 "data_size": 7936 00:25:35.375 }, 00:25:35.375 { 00:25:35.375 "name": null, 00:25:35.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.375 "is_configured": false, 00:25:35.375 "data_offset": 256, 00:25:35.375 "data_size": 7936 00:25:35.375 } 00:25:35.375 ] 00:25:35.375 }' 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.375 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.942 [2024-11-04 14:58:05.541937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:35.942 [2024-11-04 14:58:05.542069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.942 [2024-11-04 14:58:05.542106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:35.942 [2024-11-04 14:58:05.542125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.942 
[2024-11-04 14:58:05.542504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.942 [2024-11-04 14:58:05.542544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:35.942 [2024-11-04 14:58:05.542653] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:35.942 [2024-11-04 14:58:05.542697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:35.942 [2024-11-04 14:58:05.542847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:35.942 [2024-11-04 14:58:05.542873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:35.942 [2024-11-04 14:58:05.542958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:35.942 [2024-11-04 14:58:05.543099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:35.942 [2024-11-04 14:58:05.543121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:35.942 [2024-11-04 14:58:05.543300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.942 pt2 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.942 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.942 "name": "raid_bdev1", 00:25:35.942 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:35.942 "strip_size_kb": 0, 00:25:35.942 "state": "online", 00:25:35.942 "raid_level": "raid1", 00:25:35.942 "superblock": true, 00:25:35.942 "num_base_bdevs": 2, 00:25:35.942 "num_base_bdevs_discovered": 2, 00:25:35.942 "num_base_bdevs_operational": 2, 00:25:35.942 "base_bdevs_list": [ 00:25:35.942 { 00:25:35.942 "name": 
"pt1", 00:25:35.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.942 "is_configured": true, 00:25:35.942 "data_offset": 256, 00:25:35.942 "data_size": 7936 00:25:35.942 }, 00:25:35.942 { 00:25:35.942 "name": "pt2", 00:25:35.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.942 "is_configured": true, 00:25:35.942 "data_offset": 256, 00:25:35.942 "data_size": 7936 00:25:35.942 } 00:25:35.942 ] 00:25:35.942 }' 00:25:35.943 14:58:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.943 14:58:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.201 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.201 [2024-11-04 14:58:06.078553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:36.459 14:58:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.459 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.459 "name": "raid_bdev1", 00:25:36.459 "aliases": [ 00:25:36.459 "f2fd2706-bf1a-430d-8329-788418edcec6" 00:25:36.459 ], 00:25:36.459 "product_name": "Raid Volume", 00:25:36.459 "block_size": 4096, 00:25:36.459 "num_blocks": 7936, 00:25:36.459 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:36.459 "md_size": 32, 00:25:36.459 "md_interleave": false, 00:25:36.459 "dif_type": 0, 00:25:36.459 "assigned_rate_limits": { 00:25:36.459 "rw_ios_per_sec": 0, 00:25:36.459 "rw_mbytes_per_sec": 0, 00:25:36.459 "r_mbytes_per_sec": 0, 00:25:36.459 "w_mbytes_per_sec": 0 00:25:36.459 }, 00:25:36.459 "claimed": false, 00:25:36.459 "zoned": false, 00:25:36.459 "supported_io_types": { 00:25:36.459 "read": true, 00:25:36.459 "write": true, 00:25:36.459 "unmap": false, 00:25:36.459 "flush": false, 00:25:36.459 "reset": true, 00:25:36.459 "nvme_admin": false, 00:25:36.460 "nvme_io": false, 00:25:36.460 "nvme_io_md": false, 00:25:36.460 "write_zeroes": true, 00:25:36.460 "zcopy": false, 00:25:36.460 "get_zone_info": false, 00:25:36.460 "zone_management": false, 00:25:36.460 "zone_append": false, 00:25:36.460 "compare": false, 00:25:36.460 "compare_and_write": false, 00:25:36.460 "abort": false, 00:25:36.460 "seek_hole": false, 00:25:36.460 "seek_data": false, 00:25:36.460 "copy": false, 00:25:36.460 "nvme_iov_md": false 00:25:36.460 }, 00:25:36.460 "memory_domains": [ 00:25:36.460 { 00:25:36.460 "dma_device_id": "system", 00:25:36.460 "dma_device_type": 1 00:25:36.460 }, 00:25:36.460 { 00:25:36.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.460 "dma_device_type": 2 00:25:36.460 }, 00:25:36.460 { 00:25:36.460 "dma_device_id": "system", 00:25:36.460 "dma_device_type": 1 00:25:36.460 }, 00:25:36.460 { 00:25:36.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.460 
"dma_device_type": 2 00:25:36.460 } 00:25:36.460 ], 00:25:36.460 "driver_specific": { 00:25:36.460 "raid": { 00:25:36.460 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:36.460 "strip_size_kb": 0, 00:25:36.460 "state": "online", 00:25:36.460 "raid_level": "raid1", 00:25:36.460 "superblock": true, 00:25:36.460 "num_base_bdevs": 2, 00:25:36.460 "num_base_bdevs_discovered": 2, 00:25:36.460 "num_base_bdevs_operational": 2, 00:25:36.460 "base_bdevs_list": [ 00:25:36.460 { 00:25:36.460 "name": "pt1", 00:25:36.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.460 "is_configured": true, 00:25:36.460 "data_offset": 256, 00:25:36.460 "data_size": 7936 00:25:36.460 }, 00:25:36.460 { 00:25:36.460 "name": "pt2", 00:25:36.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.460 "is_configured": true, 00:25:36.460 "data_offset": 256, 00:25:36.460 "data_size": 7936 00:25:36.460 } 00:25:36.460 ] 00:25:36.460 } 00:25:36.460 } 00:25:36.460 }' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:36.460 pt2' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 
00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.460 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:36.460 [2024-11-04 14:58:06.338616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' f2fd2706-bf1a-430d-8329-788418edcec6 '!=' f2fd2706-bf1a-430d-8329-788418edcec6 ']' 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.719 [2024-11-04 14:58:06.394323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:36.719 
14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.719 "name": "raid_bdev1", 00:25:36.719 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:36.719 "strip_size_kb": 0, 00:25:36.719 "state": "online", 00:25:36.719 "raid_level": "raid1", 00:25:36.719 "superblock": true, 00:25:36.719 "num_base_bdevs": 2, 00:25:36.719 "num_base_bdevs_discovered": 1, 00:25:36.719 "num_base_bdevs_operational": 1, 00:25:36.719 "base_bdevs_list": [ 00:25:36.719 { 00:25:36.719 "name": null, 00:25:36.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.719 "is_configured": false, 00:25:36.719 "data_offset": 0, 00:25:36.719 
"data_size": 7936 00:25:36.719 }, 00:25:36.719 { 00:25:36.719 "name": "pt2", 00:25:36.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.719 "is_configured": true, 00:25:36.719 "data_offset": 256, 00:25:36.719 "data_size": 7936 00:25:36.719 } 00:25:36.719 ] 00:25:36.719 }' 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.719 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.287 [2024-11-04 14:58:06.922477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.287 [2024-11-04 14:58:06.922546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.287 [2024-11-04 14:58:06.922729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.287 [2024-11-04 14:58:06.922824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.287 [2024-11-04 14:58:06.922849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:37.287 14:58:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.287 [2024-11-04 14:58:07.002456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:37.287 [2024-11-04 14:58:07.002565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.287 [2024-11-04 14:58:07.002597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:37.287 [2024-11-04 14:58:07.002631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.287 [2024-11-04 14:58:07.005774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.287 [2024-11-04 14:58:07.005826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:37.288 [2024-11-04 14:58:07.005930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:37.288 [2024-11-04 14:58:07.006014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:37.288 [2024-11-04 14:58:07.006148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:37.288 [2024-11-04 14:58:07.006171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:37.288 [2024-11-04 14:58:07.006288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:37.288 [2024-11-04 14:58:07.006447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:37.288 [2024-11-04 14:58:07.006472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:37.288 [2024-11-04 14:58:07.006656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.288 pt2 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.288 "name": "raid_bdev1", 00:25:37.288 
"uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:37.288 "strip_size_kb": 0, 00:25:37.288 "state": "online", 00:25:37.288 "raid_level": "raid1", 00:25:37.288 "superblock": true, 00:25:37.288 "num_base_bdevs": 2, 00:25:37.288 "num_base_bdevs_discovered": 1, 00:25:37.288 "num_base_bdevs_operational": 1, 00:25:37.288 "base_bdevs_list": [ 00:25:37.288 { 00:25:37.288 "name": null, 00:25:37.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.288 "is_configured": false, 00:25:37.288 "data_offset": 256, 00:25:37.288 "data_size": 7936 00:25:37.288 }, 00:25:37.288 { 00:25:37.288 "name": "pt2", 00:25:37.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.288 "is_configured": true, 00:25:37.288 "data_offset": 256, 00:25:37.288 "data_size": 7936 00:25:37.288 } 00:25:37.288 ] 00:25:37.288 }' 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.288 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.853 [2024-11-04 14:58:07.526823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.853 [2024-11-04 14:58:07.527066] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.853 [2024-11-04 14:58:07.527193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.853 [2024-11-04 14:58:07.527300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.853 [2024-11-04 14:58:07.527318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:37.853 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.854 [2024-11-04 14:58:07.594875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:37.854 [2024-11-04 14:58:07.594959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.854 [2024-11-04 14:58:07.594993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:37.854 [2024-11-04 14:58:07.595008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.854 [2024-11-04 
14:58:07.597822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.854 [2024-11-04 14:58:07.597867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:37.854 [2024-11-04 14:58:07.597961] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:37.854 [2024-11-04 14:58:07.598022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:37.854 [2024-11-04 14:58:07.598196] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:37.854 [2024-11-04 14:58:07.598213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.854 [2024-11-04 14:58:07.598238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:37.854 [2024-11-04 14:58:07.598331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:37.854 [2024-11-04 14:58:07.598425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:37.854 [2024-11-04 14:58:07.598440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:37.854 [2024-11-04 14:58:07.598525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:37.854 [2024-11-04 14:58:07.598654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:37.854 [2024-11-04 14:58:07.598671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:37.854 [2024-11-04 14:58:07.598856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.854 pt1 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.854 14:58:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.854 "name": "raid_bdev1", 00:25:37.854 "uuid": "f2fd2706-bf1a-430d-8329-788418edcec6", 00:25:37.854 "strip_size_kb": 0, 00:25:37.854 "state": "online", 00:25:37.854 "raid_level": "raid1", 00:25:37.854 "superblock": true, 00:25:37.854 "num_base_bdevs": 2, 00:25:37.854 "num_base_bdevs_discovered": 1, 00:25:37.854 "num_base_bdevs_operational": 1, 00:25:37.854 "base_bdevs_list": [ 00:25:37.854 { 00:25:37.854 "name": null, 00:25:37.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.854 "is_configured": false, 00:25:37.854 "data_offset": 256, 00:25:37.854 "data_size": 7936 00:25:37.854 }, 00:25:37.854 { 00:25:37.854 "name": "pt2", 00:25:37.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.854 "is_configured": true, 00:25:37.854 "data_offset": 256, 00:25:37.854 "data_size": 7936 00:25:37.854 } 00:25:37.854 ] 00:25:37.854 }' 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.854 14:58:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:38.420 [2024-11-04 14:58:08.175378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' f2fd2706-bf1a-430d-8329-788418edcec6 '!=' f2fd2706-bf1a-430d-8329-788418edcec6 ']' 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88039 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88039 ']' 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 88039 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88039 00:25:38.420 killing process with pid 88039 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88039' 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@971 -- # kill 88039 00:25:38.420 [2024-11-04 14:58:08.250253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:38.420 14:58:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 88039 00:25:38.420 [2024-11-04 14:58:08.250416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.420 [2024-11-04 14:58:08.250488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:38.420 [2024-11-04 14:58:08.250513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:38.678 [2024-11-04 14:58:08.459307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:40.052 14:58:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:25:40.052 00:25:40.052 real 0m6.826s 00:25:40.052 user 0m10.674s 00:25:40.052 sys 0m1.086s 00:25:40.052 14:58:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:40.052 ************************************ 00:25:40.052 END TEST raid_superblock_test_md_separate 00:25:40.052 ************************************ 00:25:40.052 14:58:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:40.052 14:58:09 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:25:40.052 14:58:09 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:25:40.052 14:58:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:25:40.052 14:58:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:40.052 14:58:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:40.052 ************************************ 00:25:40.052 START TEST raid_rebuild_test_sb_md_separate 00:25:40.052 
************************************ 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:40.052 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88368 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88368 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88368 ']' 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:40.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:40.053 14:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:40.053 [2024-11-04 14:58:09.729220] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:25:40.053 [2024-11-04 14:58:09.729648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88368 ] 00:25:40.053 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:40.053 Zero copy mechanism will not be used. 00:25:40.053 [2024-11-04 14:58:09.917852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.311 [2024-11-04 14:58:10.064734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.569 [2024-11-04 14:58:10.274567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.569 [2024-11-04 14:58:10.274689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 BaseBdev1_malloc 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 [2024-11-04 14:58:10.784185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:41.136 [2024-11-04 14:58:10.784297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.136 [2024-11-04 14:58:10.784331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:41.136 [2024-11-04 14:58:10.784349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.136 [2024-11-04 14:58:10.786855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.136 [2024-11-04 14:58:10.786896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:41.136 BaseBdev1 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:25:41.136 BaseBdev2_malloc 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 [2024-11-04 14:58:10.841414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:41.136 [2024-11-04 14:58:10.841827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.136 [2024-11-04 14:58:10.841915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:41.136 [2024-11-04 14:58:10.841936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.136 [2024-11-04 14:58:10.844533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.136 [2024-11-04 14:58:10.844575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:41.136 BaseBdev2 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 spare_malloc 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 spare_delay 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 [2024-11-04 14:58:10.910181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:41.136 [2024-11-04 14:58:10.910622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.136 [2024-11-04 14:58:10.910678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:41.136 [2024-11-04 14:58:10.910696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.136 [2024-11-04 14:58:10.913172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.136 [2024-11-04 14:58:10.913214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:41.136 spare 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:41.136 14:58:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 [2024-11-04 14:58:10.922297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:41.136 [2024-11-04 14:58:10.924715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:41.136 [2024-11-04 14:58:10.924946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:41.136 [2024-11-04 14:58:10.924968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:41.136 [2024-11-04 14:58:10.925059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:41.136 [2024-11-04 14:58:10.925250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:41.136 [2024-11-04 14:58:10.925286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:41.136 [2024-11-04 14:58:10.925406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.136 14:58:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.136 "name": "raid_bdev1", 00:25:41.136 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:41.136 "strip_size_kb": 0, 00:25:41.136 "state": "online", 00:25:41.136 "raid_level": "raid1", 00:25:41.136 "superblock": true, 00:25:41.136 "num_base_bdevs": 2, 00:25:41.136 "num_base_bdevs_discovered": 2, 00:25:41.136 "num_base_bdevs_operational": 2, 00:25:41.136 "base_bdevs_list": [ 00:25:41.136 { 00:25:41.136 "name": "BaseBdev1", 00:25:41.136 "uuid": "240d6591-196c-559d-b439-c766a744dd3c", 00:25:41.136 "is_configured": true, 00:25:41.136 "data_offset": 256, 00:25:41.136 "data_size": 7936 00:25:41.136 }, 00:25:41.136 { 00:25:41.136 "name": "BaseBdev2", 00:25:41.136 "uuid": 
"acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:41.136 "is_configured": true, 00:25:41.136 "data_offset": 256, 00:25:41.136 "data_size": 7936 00:25:41.136 } 00:25:41.136 ] 00:25:41.136 }' 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.136 14:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.703 [2024-11-04 14:58:11.446848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:41.703 14:58:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:41.703 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:41.704 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:41.962 [2024-11-04 14:58:11.782763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:41.962 /dev/nbd0 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:41.962 14:58:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.962 1+0 records in 00:25:41.962 1+0 records out 00:25:41.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402137 s, 10.2 MB/s 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:25:41.962 14:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:25:43.337 7936+0 records in 00:25:43.337 7936+0 records out 00:25:43.337 32505856 bytes (33 MB, 31 MiB) copied, 0.940488 s, 34.6 MB/s 00:25:43.337 14:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:43.337 14:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:43.337 14:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:43.337 14:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:43.337 14:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:43.337 14:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.337 14:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:43.337 [2024-11-04 14:58:13.087842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.337 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:43.338 [2024-11-04 14:58:13.095940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:43.338 14:58:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:43.338 "name": "raid_bdev1", 00:25:43.338 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:43.338 "strip_size_kb": 0, 00:25:43.338 "state": "online", 00:25:43.338 "raid_level": "raid1", 00:25:43.338 "superblock": true, 00:25:43.338 "num_base_bdevs": 2, 00:25:43.338 "num_base_bdevs_discovered": 1, 00:25:43.338 "num_base_bdevs_operational": 1, 00:25:43.338 "base_bdevs_list": [ 00:25:43.338 { 00:25:43.338 "name": null, 00:25:43.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.338 "is_configured": false, 00:25:43.338 "data_offset": 0, 00:25:43.338 "data_size": 7936 00:25:43.338 }, 00:25:43.338 { 00:25:43.338 "name": "BaseBdev2", 00:25:43.338 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:43.338 "is_configured": true, 00:25:43.338 "data_offset": 256, 00:25:43.338 "data_size": 7936 00:25:43.338 } 
00:25:43.338 ] 00:25:43.338 }' 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:43.338 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:43.904 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:43.904 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.904 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:43.904 [2024-11-04 14:58:13.564105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:43.904 [2024-11-04 14:58:13.577589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:25:43.904 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.904 14:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:43.904 [2024-11-04 14:58:13.580398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.838 14:58:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.838 "name": "raid_bdev1", 00:25:44.838 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:44.838 "strip_size_kb": 0, 00:25:44.838 "state": "online", 00:25:44.838 "raid_level": "raid1", 00:25:44.838 "superblock": true, 00:25:44.838 "num_base_bdevs": 2, 00:25:44.838 "num_base_bdevs_discovered": 2, 00:25:44.838 "num_base_bdevs_operational": 2, 00:25:44.838 "process": { 00:25:44.838 "type": "rebuild", 00:25:44.838 "target": "spare", 00:25:44.838 "progress": { 00:25:44.838 "blocks": 2560, 00:25:44.838 "percent": 32 00:25:44.838 } 00:25:44.838 }, 00:25:44.838 "base_bdevs_list": [ 00:25:44.838 { 00:25:44.838 "name": "spare", 00:25:44.838 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:44.838 "is_configured": true, 00:25:44.838 "data_offset": 256, 00:25:44.838 "data_size": 7936 00:25:44.838 }, 00:25:44.838 { 00:25:44.838 "name": "BaseBdev2", 00:25:44.838 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:44.838 "is_configured": true, 00:25:44.838 "data_offset": 256, 00:25:44.838 "data_size": 7936 00:25:44.838 } 00:25:44.838 ] 00:25:44.838 }' 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.838 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:25:45.096 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:45.096 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:45.096 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.096 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:45.096 [2024-11-04 14:58:14.746712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:45.096 [2024-11-04 14:58:14.792843] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:45.096 [2024-11-04 14:58:14.792958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.096 [2024-11-04 14:58:14.792997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:45.096 [2024-11-04 14:58:14.793016] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:45.096 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.097 "name": "raid_bdev1", 00:25:45.097 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:45.097 "strip_size_kb": 0, 00:25:45.097 "state": "online", 00:25:45.097 "raid_level": "raid1", 00:25:45.097 "superblock": true, 00:25:45.097 "num_base_bdevs": 2, 00:25:45.097 "num_base_bdevs_discovered": 1, 00:25:45.097 "num_base_bdevs_operational": 1, 00:25:45.097 "base_bdevs_list": [ 00:25:45.097 { 00:25:45.097 "name": null, 00:25:45.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.097 "is_configured": false, 00:25:45.097 "data_offset": 0, 00:25:45.097 "data_size": 7936 00:25:45.097 }, 00:25:45.097 { 00:25:45.097 "name": "BaseBdev2", 00:25:45.097 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:45.097 "is_configured": true, 00:25:45.097 "data_offset": 
256, 00:25:45.097 "data_size": 7936 00:25:45.097 } 00:25:45.097 ] 00:25:45.097 }' 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.097 14:58:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:45.666 "name": "raid_bdev1", 00:25:45.666 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:45.666 "strip_size_kb": 0, 00:25:45.666 "state": "online", 00:25:45.666 "raid_level": "raid1", 00:25:45.666 "superblock": true, 00:25:45.666 "num_base_bdevs": 2, 00:25:45.666 "num_base_bdevs_discovered": 1, 00:25:45.666 "num_base_bdevs_operational": 1, 
00:25:45.666 "base_bdevs_list": [ 00:25:45.666 { 00:25:45.666 "name": null, 00:25:45.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.666 "is_configured": false, 00:25:45.666 "data_offset": 0, 00:25:45.666 "data_size": 7936 00:25:45.666 }, 00:25:45.666 { 00:25:45.666 "name": "BaseBdev2", 00:25:45.666 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:45.666 "is_configured": true, 00:25:45.666 "data_offset": 256, 00:25:45.666 "data_size": 7936 00:25:45.666 } 00:25:45.666 ] 00:25:45.666 }' 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:45.666 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:45.667 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:45.667 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.667 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:45.667 [2024-11-04 14:58:15.515669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:45.667 [2024-11-04 14:58:15.529824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:25:45.667 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.667 14:58:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:45.667 [2024-11-04 14:58:15.532725] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:47.041 14:58:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:47.041 "name": "raid_bdev1", 00:25:47.041 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:47.041 "strip_size_kb": 0, 00:25:47.041 "state": "online", 00:25:47.041 "raid_level": "raid1", 00:25:47.041 "superblock": true, 00:25:47.041 "num_base_bdevs": 2, 00:25:47.041 "num_base_bdevs_discovered": 2, 00:25:47.041 "num_base_bdevs_operational": 2, 00:25:47.041 "process": { 00:25:47.041 "type": "rebuild", 00:25:47.041 "target": "spare", 00:25:47.041 "progress": { 00:25:47.041 "blocks": 2560, 00:25:47.041 "percent": 32 00:25:47.041 } 00:25:47.041 }, 00:25:47.041 "base_bdevs_list": [ 00:25:47.041 { 00:25:47.041 "name": "spare", 00:25:47.041 "uuid": 
"62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:47.041 "is_configured": true, 00:25:47.041 "data_offset": 256, 00:25:47.041 "data_size": 7936 00:25:47.041 }, 00:25:47.041 { 00:25:47.041 "name": "BaseBdev2", 00:25:47.041 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:47.041 "is_configured": true, 00:25:47.041 "data_offset": 256, 00:25:47.041 "data_size": 7936 00:25:47.041 } 00:25:47.041 ] 00:25:47.041 }' 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:47.041 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=778 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.041 
14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.041 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:47.041 "name": "raid_bdev1", 00:25:47.041 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:47.041 "strip_size_kb": 0, 00:25:47.041 "state": "online", 00:25:47.041 "raid_level": "raid1", 00:25:47.041 "superblock": true, 00:25:47.041 "num_base_bdevs": 2, 00:25:47.041 "num_base_bdevs_discovered": 2, 00:25:47.041 "num_base_bdevs_operational": 2, 00:25:47.041 "process": { 00:25:47.041 "type": "rebuild", 00:25:47.041 "target": "spare", 00:25:47.041 "progress": { 00:25:47.041 "blocks": 2816, 00:25:47.041 "percent": 35 00:25:47.041 } 00:25:47.041 }, 00:25:47.041 "base_bdevs_list": [ 00:25:47.041 { 00:25:47.041 "name": "spare", 00:25:47.041 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:47.041 "is_configured": true, 00:25:47.041 "data_offset": 256, 00:25:47.041 "data_size": 7936 00:25:47.041 
}, 00:25:47.041 { 00:25:47.041 "name": "BaseBdev2", 00:25:47.041 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:47.041 "is_configured": true, 00:25:47.041 "data_offset": 256, 00:25:47.041 "data_size": 7936 00:25:47.041 } 00:25:47.042 ] 00:25:47.042 }' 00:25:47.042 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:47.042 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:47.042 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:47.042 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.042 14:58:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:48.415 14:58:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.415 "name": "raid_bdev1", 00:25:48.415 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:48.415 "strip_size_kb": 0, 00:25:48.415 "state": "online", 00:25:48.415 "raid_level": "raid1", 00:25:48.415 "superblock": true, 00:25:48.415 "num_base_bdevs": 2, 00:25:48.415 "num_base_bdevs_discovered": 2, 00:25:48.415 "num_base_bdevs_operational": 2, 00:25:48.415 "process": { 00:25:48.415 "type": "rebuild", 00:25:48.415 "target": "spare", 00:25:48.415 "progress": { 00:25:48.415 "blocks": 5888, 00:25:48.415 "percent": 74 00:25:48.415 } 00:25:48.415 }, 00:25:48.415 "base_bdevs_list": [ 00:25:48.415 { 00:25:48.415 "name": "spare", 00:25:48.415 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:48.415 "is_configured": true, 00:25:48.415 "data_offset": 256, 00:25:48.415 "data_size": 7936 00:25:48.415 }, 00:25:48.415 { 00:25:48.415 "name": "BaseBdev2", 00:25:48.415 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:48.415 "is_configured": true, 00:25:48.415 "data_offset": 256, 00:25:48.415 "data_size": 7936 00:25:48.415 } 00:25:48.415 ] 00:25:48.415 }' 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.415 14:58:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:48.415 14:58:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.415 14:58:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:25:48.982 [2024-11-04 14:58:18.661876] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:48.982 [2024-11-04 14:58:18.661977] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:48.982 [2024-11-04 14:58:18.662177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:49.240 "name": "raid_bdev1", 00:25:49.240 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:49.240 
"strip_size_kb": 0, 00:25:49.240 "state": "online", 00:25:49.240 "raid_level": "raid1", 00:25:49.240 "superblock": true, 00:25:49.240 "num_base_bdevs": 2, 00:25:49.240 "num_base_bdevs_discovered": 2, 00:25:49.240 "num_base_bdevs_operational": 2, 00:25:49.240 "base_bdevs_list": [ 00:25:49.240 { 00:25:49.240 "name": "spare", 00:25:49.240 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:49.240 "is_configured": true, 00:25:49.240 "data_offset": 256, 00:25:49.240 "data_size": 7936 00:25:49.240 }, 00:25:49.240 { 00:25:49.240 "name": "BaseBdev2", 00:25:49.240 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:49.240 "is_configured": true, 00:25:49.240 "data_offset": 256, 00:25:49.240 "data_size": 7936 00:25:49.240 } 00:25:49.240 ] 00:25:49.240 }' 00:25:49.240 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:49.499 14:58:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:49.499 "name": "raid_bdev1", 00:25:49.499 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:49.499 "strip_size_kb": 0, 00:25:49.499 "state": "online", 00:25:49.499 "raid_level": "raid1", 00:25:49.499 "superblock": true, 00:25:49.499 "num_base_bdevs": 2, 00:25:49.499 "num_base_bdevs_discovered": 2, 00:25:49.499 "num_base_bdevs_operational": 2, 00:25:49.499 "base_bdevs_list": [ 00:25:49.499 { 00:25:49.499 "name": "spare", 00:25:49.499 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:49.499 "is_configured": true, 00:25:49.499 "data_offset": 256, 00:25:49.499 "data_size": 7936 00:25:49.499 }, 00:25:49.499 { 00:25:49.499 "name": "BaseBdev2", 00:25:49.499 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:49.499 "is_configured": true, 00:25:49.499 "data_offset": 256, 00:25:49.499 "data_size": 7936 00:25:49.499 } 00:25:49.499 ] 00:25:49.499 }' 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:49.499 14:58:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.499 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.757 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.757 "name": "raid_bdev1", 00:25:49.757 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:49.757 "strip_size_kb": 0, 00:25:49.757 "state": "online", 00:25:49.757 "raid_level": "raid1", 00:25:49.757 "superblock": true, 00:25:49.757 "num_base_bdevs": 2, 00:25:49.757 "num_base_bdevs_discovered": 2, 00:25:49.757 "num_base_bdevs_operational": 2, 00:25:49.757 "base_bdevs_list": [ 00:25:49.757 { 00:25:49.757 "name": "spare", 00:25:49.757 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:49.757 "is_configured": true, 00:25:49.757 "data_offset": 256, 00:25:49.757 "data_size": 7936 00:25:49.757 }, 00:25:49.757 { 00:25:49.757 "name": "BaseBdev2", 00:25:49.757 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:49.757 "is_configured": true, 00:25:49.757 "data_offset": 256, 00:25:49.757 "data_size": 7936 00:25:49.757 } 00:25:49.757 ] 00:25:49.757 }' 00:25:49.757 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.757 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:50.014 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:50.014 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.014 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:50.014 [2024-11-04 14:58:19.899508] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:50.014 [2024-11-04 14:58:19.899744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:50.014 [2024-11-04 14:58:19.899971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:50.015 [2024-11-04 14:58:19.900189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:25:50.015 [2024-11-04 14:58:19.900216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:50.015 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:50.272 14:58:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:50.272 14:58:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:50.531 /dev/nbd0 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:50.531 1+0 records in 00:25:50.531 1+0 records out 00:25:50.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549486 
s, 7.5 MB/s 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:50.531 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:50.790 /dev/nbd1 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@875 -- # break 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:50.790 1+0 records in 00:25:50.790 1+0 records out 00:25:50.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031749 s, 12.9 MB/s 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:50.790 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:51.048 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:51.048 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:51.048 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:51.048 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:51.048 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:51.048 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:51.048 14:58:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:51.306 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:51.565 
14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:51.565 [2024-11-04 14:58:21.301942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:51.565 [2024-11-04 14:58:21.302058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.565 [2024-11-04 14:58:21.302095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:25:51.565 [2024-11-04 14:58:21.302120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.565 [2024-11-04 14:58:21.304954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.565 [2024-11-04 14:58:21.304995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:51.565 [2024-11-04 14:58:21.305092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:51.565 [2024-11-04 14:58:21.305163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:51.565 [2024-11-04 14:58:21.305388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:51.565 spare 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:51.565 [2024-11-04 14:58:21.405491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:51.565 [2024-11-04 14:58:21.405524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:51.565 [2024-11-04 14:58:21.405679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:25:51.565 [2024-11-04 14:58:21.405869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:51.565 [2024-11-04 14:58:21.405885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:51.565 [2024-11-04 14:58:21.406068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:51.565 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.823 14:58:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.823 "name": "raid_bdev1", 00:25:51.823 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:51.823 "strip_size_kb": 0, 00:25:51.823 "state": "online", 00:25:51.823 "raid_level": "raid1", 00:25:51.823 "superblock": true, 00:25:51.823 "num_base_bdevs": 2, 00:25:51.823 "num_base_bdevs_discovered": 2, 00:25:51.823 "num_base_bdevs_operational": 2, 00:25:51.823 "base_bdevs_list": [ 00:25:51.823 { 00:25:51.823 "name": "spare", 00:25:51.823 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:51.823 "is_configured": true, 00:25:51.823 "data_offset": 256, 00:25:51.823 "data_size": 7936 00:25:51.823 }, 00:25:51.823 { 00:25:51.823 "name": "BaseBdev2", 00:25:51.823 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:51.823 "is_configured": true, 00:25:51.823 "data_offset": 256, 00:25:51.823 "data_size": 7936 00:25:51.823 } 00:25:51.823 ] 00:25:51.823 }' 00:25:51.823 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.823 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:52.081 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.339 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:52.339 "name": "raid_bdev1", 00:25:52.339 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:52.339 "strip_size_kb": 0, 00:25:52.339 "state": "online", 00:25:52.339 "raid_level": "raid1", 00:25:52.339 "superblock": true, 00:25:52.339 "num_base_bdevs": 2, 00:25:52.339 "num_base_bdevs_discovered": 2, 00:25:52.339 "num_base_bdevs_operational": 2, 00:25:52.339 "base_bdevs_list": [ 00:25:52.339 { 00:25:52.340 "name": "spare", 00:25:52.340 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:52.340 "is_configured": true, 00:25:52.340 "data_offset": 256, 00:25:52.340 "data_size": 7936 00:25:52.340 }, 00:25:52.340 { 00:25:52.340 "name": "BaseBdev2", 00:25:52.340 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:52.340 "is_configured": true, 00:25:52.340 "data_offset": 256, 00:25:52.340 "data_size": 7936 00:25:52.340 } 00:25:52.340 ] 00:25:52.340 }' 00:25:52.340 14:58:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:52.340 [2024-11-04 14:58:22.138454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:52.340 14:58:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:52.340 "name": "raid_bdev1", 00:25:52.340 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:52.340 "strip_size_kb": 0, 00:25:52.340 "state": "online", 00:25:52.340 "raid_level": "raid1", 00:25:52.340 "superblock": true, 00:25:52.340 "num_base_bdevs": 2, 00:25:52.340 "num_base_bdevs_discovered": 1, 00:25:52.340 "num_base_bdevs_operational": 1, 00:25:52.340 "base_bdevs_list": [ 00:25:52.340 { 00:25:52.340 "name": null, 00:25:52.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.340 "is_configured": false, 00:25:52.340 "data_offset": 0, 00:25:52.340 "data_size": 7936 00:25:52.340 }, 00:25:52.340 { 00:25:52.340 "name": "BaseBdev2", 00:25:52.340 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:52.340 "is_configured": true, 00:25:52.340 "data_offset": 256, 00:25:52.340 "data_size": 7936 00:25:52.340 } 
00:25:52.340 ] 00:25:52.340 }' 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:52.340 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:52.906 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:52.906 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.906 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:52.906 [2024-11-04 14:58:22.666752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:52.906 [2024-11-04 14:58:22.667040] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:52.906 [2024-11-04 14:58:22.667068] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:52.906 [2024-11-04 14:58:22.667152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:52.906 [2024-11-04 14:58:22.680613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:25:52.906 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.906 14:58:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:52.906 [2024-11-04 14:58:22.683343] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.839 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:54.097 "name": "raid_bdev1", 00:25:54.097 
"uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:54.097 "strip_size_kb": 0, 00:25:54.097 "state": "online", 00:25:54.097 "raid_level": "raid1", 00:25:54.097 "superblock": true, 00:25:54.097 "num_base_bdevs": 2, 00:25:54.097 "num_base_bdevs_discovered": 2, 00:25:54.097 "num_base_bdevs_operational": 2, 00:25:54.097 "process": { 00:25:54.097 "type": "rebuild", 00:25:54.097 "target": "spare", 00:25:54.097 "progress": { 00:25:54.097 "blocks": 2560, 00:25:54.097 "percent": 32 00:25:54.097 } 00:25:54.097 }, 00:25:54.097 "base_bdevs_list": [ 00:25:54.097 { 00:25:54.097 "name": "spare", 00:25:54.097 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:54.097 "is_configured": true, 00:25:54.097 "data_offset": 256, 00:25:54.097 "data_size": 7936 00:25:54.097 }, 00:25:54.097 { 00:25:54.097 "name": "BaseBdev2", 00:25:54.097 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:54.097 "is_configured": true, 00:25:54.097 "data_offset": 256, 00:25:54.097 "data_size": 7936 00:25:54.097 } 00:25:54.097 ] 00:25:54.097 }' 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:54.097 [2024-11-04 14:58:23.842381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:54.097 
[2024-11-04 14:58:23.895554] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:54.097 [2024-11-04 14:58:23.896048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.097 [2024-11-04 14:58:23.896079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:54.097 [2024-11-04 14:58:23.896113] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.097 14:58:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.097 "name": "raid_bdev1", 00:25:54.097 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:54.097 "strip_size_kb": 0, 00:25:54.097 "state": "online", 00:25:54.097 "raid_level": "raid1", 00:25:54.097 "superblock": true, 00:25:54.097 "num_base_bdevs": 2, 00:25:54.097 "num_base_bdevs_discovered": 1, 00:25:54.097 "num_base_bdevs_operational": 1, 00:25:54.097 "base_bdevs_list": [ 00:25:54.097 { 00:25:54.097 "name": null, 00:25:54.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.097 "is_configured": false, 00:25:54.097 "data_offset": 0, 00:25:54.097 "data_size": 7936 00:25:54.097 }, 00:25:54.097 { 00:25:54.097 "name": "BaseBdev2", 00:25:54.097 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:54.097 "is_configured": true, 00:25:54.097 "data_offset": 256, 00:25:54.097 "data_size": 7936 00:25:54.097 } 00:25:54.097 ] 00:25:54.097 }' 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.097 14:58:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:54.663 14:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:54.663 14:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.663 14:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.663 [2024-11-04 14:58:24.451041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:54.663 [2024-11-04 14:58:24.451149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.663 [2024-11-04 14:58:24.451192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:54.663 [2024-11-04 14:58:24.451211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.663 [2024-11-04 14:58:24.451627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.663 [2024-11-04 14:58:24.451674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:54.663 [2024-11-04 14:58:24.451780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:54.663 [2024-11-04 14:58:24.451804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:54.663 [2024-11-04 14:58:24.451819] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:54.663 [2024-11-04 14:58:24.451856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:54.663 [2024-11-04 14:58:24.464300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:25:54.663 spare 00:25:54.663 14:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.663 14:58:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:54.663 [2024-11-04 14:58:24.467056] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.597 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:55.856 "name": 
"raid_bdev1", 00:25:55.856 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:55.856 "strip_size_kb": 0, 00:25:55.856 "state": "online", 00:25:55.856 "raid_level": "raid1", 00:25:55.856 "superblock": true, 00:25:55.856 "num_base_bdevs": 2, 00:25:55.856 "num_base_bdevs_discovered": 2, 00:25:55.856 "num_base_bdevs_operational": 2, 00:25:55.856 "process": { 00:25:55.856 "type": "rebuild", 00:25:55.856 "target": "spare", 00:25:55.856 "progress": { 00:25:55.856 "blocks": 2560, 00:25:55.856 "percent": 32 00:25:55.856 } 00:25:55.856 }, 00:25:55.856 "base_bdevs_list": [ 00:25:55.856 { 00:25:55.856 "name": "spare", 00:25:55.856 "uuid": "62b06945-5049-5265-8f8e-63ddcc39fe33", 00:25:55.856 "is_configured": true, 00:25:55.856 "data_offset": 256, 00:25:55.856 "data_size": 7936 00:25:55.856 }, 00:25:55.856 { 00:25:55.856 "name": "BaseBdev2", 00:25:55.856 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:55.856 "is_configured": true, 00:25:55.856 "data_offset": 256, 00:25:55.856 "data_size": 7936 00:25:55.856 } 00:25:55.856 ] 00:25:55.856 }' 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:55.856 [2024-11-04 14:58:25.638348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:55.856 [2024-11-04 14:58:25.679593] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:55.856 [2024-11-04 14:58:25.679928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.856 [2024-11-04 14:58:25.680141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:55.856 [2024-11-04 14:58:25.680316] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.856 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.119 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.119 "name": "raid_bdev1", 00:25:56.119 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:56.119 "strip_size_kb": 0, 00:25:56.119 "state": "online", 00:25:56.119 "raid_level": "raid1", 00:25:56.119 "superblock": true, 00:25:56.119 "num_base_bdevs": 2, 00:25:56.119 "num_base_bdevs_discovered": 1, 00:25:56.119 "num_base_bdevs_operational": 1, 00:25:56.119 "base_bdevs_list": [ 00:25:56.119 { 00:25:56.119 "name": null, 00:25:56.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.119 "is_configured": false, 00:25:56.119 "data_offset": 0, 00:25:56.119 "data_size": 7936 00:25:56.119 }, 00:25:56.119 { 00:25:56.119 "name": "BaseBdev2", 00:25:56.119 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:56.119 "is_configured": true, 00:25:56.119 "data_offset": 256, 00:25:56.119 "data_size": 7936 00:25:56.119 } 00:25:56.119 ] 00:25:56.119 }' 00:25:56.119 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.119 14:58:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:56.377 14:58:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:56.377 "name": "raid_bdev1", 00:25:56.377 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:56.377 "strip_size_kb": 0, 00:25:56.377 "state": "online", 00:25:56.377 "raid_level": "raid1", 00:25:56.377 "superblock": true, 00:25:56.377 "num_base_bdevs": 2, 00:25:56.377 "num_base_bdevs_discovered": 1, 00:25:56.377 "num_base_bdevs_operational": 1, 00:25:56.377 "base_bdevs_list": [ 00:25:56.377 { 00:25:56.377 "name": null, 00:25:56.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.377 "is_configured": false, 00:25:56.377 "data_offset": 0, 00:25:56.377 "data_size": 7936 00:25:56.377 }, 00:25:56.377 { 00:25:56.377 "name": "BaseBdev2", 00:25:56.377 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:56.377 "is_configured": true, 00:25:56.377 "data_offset": 256, 00:25:56.377 "data_size": 7936 00:25:56.377 } 00:25:56.377 ] 00:25:56.377 }' 00:25:56.377 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:56.636 [2024-11-04 14:58:26.360578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:56.636 [2024-11-04 14:58:26.360662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.636 [2024-11-04 14:58:26.360708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:56.636 [2024-11-04 14:58:26.360741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.636 [2024-11-04 14:58:26.361060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.636 [2024-11-04 14:58:26.361091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:25:56.636 [2024-11-04 14:58:26.361167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:56.636 [2024-11-04 14:58:26.361191] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:56.636 [2024-11-04 14:58:26.361205] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:56.636 [2024-11-04 14:58:26.361218] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:56.636 BaseBdev1 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.636 14:58:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.572 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.572 "name": "raid_bdev1", 00:25:57.572 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:57.572 "strip_size_kb": 0, 00:25:57.572 "state": "online", 00:25:57.572 "raid_level": "raid1", 00:25:57.572 "superblock": true, 00:25:57.572 "num_base_bdevs": 2, 00:25:57.572 "num_base_bdevs_discovered": 1, 00:25:57.572 "num_base_bdevs_operational": 1, 00:25:57.572 "base_bdevs_list": [ 00:25:57.572 { 00:25:57.572 "name": null, 00:25:57.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.572 "is_configured": false, 00:25:57.572 "data_offset": 0, 00:25:57.572 "data_size": 7936 00:25:57.572 }, 00:25:57.572 { 00:25:57.572 "name": "BaseBdev2", 00:25:57.572 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:57.572 "is_configured": true, 00:25:57.572 "data_offset": 256, 00:25:57.572 "data_size": 7936 00:25:57.572 } 00:25:57.572 ] 00:25:57.572 }' 00:25:57.573 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.573 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:58.140 "name": "raid_bdev1", 00:25:58.140 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:58.140 "strip_size_kb": 0, 00:25:58.140 "state": "online", 00:25:58.140 "raid_level": "raid1", 00:25:58.140 "superblock": true, 00:25:58.140 "num_base_bdevs": 2, 00:25:58.140 "num_base_bdevs_discovered": 1, 00:25:58.140 "num_base_bdevs_operational": 1, 00:25:58.140 "base_bdevs_list": [ 00:25:58.140 { 00:25:58.140 "name": null, 00:25:58.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.140 "is_configured": false, 00:25:58.140 "data_offset": 0, 00:25:58.140 "data_size": 7936 00:25:58.140 }, 00:25:58.140 { 00:25:58.140 "name": "BaseBdev2", 00:25:58.140 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:58.140 "is_configured": 
true, 00:25:58.140 "data_offset": 256, 00:25:58.140 "data_size": 7936 00:25:58.140 } 00:25:58.140 ] 00:25:58.140 }' 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:58.140 14:58:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:58.404 [2024-11-04 14:58:28.041119] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:58.404 [2024-11-04 14:58:28.041365] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:58.404 [2024-11-04 14:58:28.041390] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:58.404 request: 00:25:58.404 { 00:25:58.404 "base_bdev": "BaseBdev1", 00:25:58.404 "raid_bdev": "raid_bdev1", 00:25:58.404 "method": "bdev_raid_add_base_bdev", 00:25:58.404 "req_id": 1 00:25:58.404 } 00:25:58.404 Got JSON-RPC error response 00:25:58.404 response: 00:25:58.404 { 00:25:58.404 "code": -22, 00:25:58.404 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:58.404 } 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:58.404 14:58:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:59.342 "name": "raid_bdev1", 00:25:59.342 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:59.342 "strip_size_kb": 0, 00:25:59.342 "state": "online", 00:25:59.342 "raid_level": "raid1", 00:25:59.342 "superblock": true, 00:25:59.342 "num_base_bdevs": 2, 00:25:59.342 "num_base_bdevs_discovered": 1, 00:25:59.342 "num_base_bdevs_operational": 1, 00:25:59.342 "base_bdevs_list": [ 00:25:59.342 { 00:25:59.342 "name": null, 00:25:59.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.342 "is_configured": false, 00:25:59.342 
"data_offset": 0, 00:25:59.342 "data_size": 7936 00:25:59.342 }, 00:25:59.342 { 00:25:59.342 "name": "BaseBdev2", 00:25:59.342 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:59.342 "is_configured": true, 00:25:59.342 "data_offset": 256, 00:25:59.342 "data_size": 7936 00:25:59.342 } 00:25:59.342 ] 00:25:59.342 }' 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:59.342 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.910 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:59.910 "name": "raid_bdev1", 00:25:59.910 "uuid": "09b2b254-0797-4b7c-9248-6344aea64970", 00:25:59.910 
"strip_size_kb": 0, 00:25:59.910 "state": "online", 00:25:59.910 "raid_level": "raid1", 00:25:59.910 "superblock": true, 00:25:59.910 "num_base_bdevs": 2, 00:25:59.910 "num_base_bdevs_discovered": 1, 00:25:59.910 "num_base_bdevs_operational": 1, 00:25:59.910 "base_bdevs_list": [ 00:25:59.910 { 00:25:59.910 "name": null, 00:25:59.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.910 "is_configured": false, 00:25:59.910 "data_offset": 0, 00:25:59.910 "data_size": 7936 00:25:59.910 }, 00:25:59.910 { 00:25:59.911 "name": "BaseBdev2", 00:25:59.911 "uuid": "acfe2750-94c2-54fc-8b9a-2f4c1f23f6a5", 00:25:59.911 "is_configured": true, 00:25:59.911 "data_offset": 256, 00:25:59.911 "data_size": 7936 00:25:59.911 } 00:25:59.911 ] 00:25:59.911 }' 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88368 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88368 ']' 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88368 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88368 00:25:59.911 killing process with 
pid 88368 00:25:59.911 Received shutdown signal, test time was about 60.000000 seconds 00:25:59.911 00:25:59.911 Latency(us) 00:25:59.911 [2024-11-04T14:58:29.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.911 [2024-11-04T14:58:29.803Z] =================================================================================================================== 00:25:59.911 [2024-11-04T14:58:29.803Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88368' 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88368 00:25:59.911 [2024-11-04 14:58:29.778334] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:59.911 14:58:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88368 00:25:59.911 [2024-11-04 14:58:29.778490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.911 [2024-11-04 14:58:29.778560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:59.911 [2024-11-04 14:58:29.778581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:26:00.479 [2024-11-04 14:58:30.085955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:01.416 14:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:26:01.416 00:26:01.416 real 0m21.534s 00:26:01.416 user 0m28.964s 00:26:01.416 sys 0m2.613s 00:26:01.416 14:58:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:01.416 ************************************ 00:26:01.416 END TEST raid_rebuild_test_sb_md_separate 00:26:01.416 14:58:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:01.416 ************************************ 00:26:01.416 14:58:31 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:26:01.416 14:58:31 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:26:01.416 14:58:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:01.416 14:58:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:01.416 14:58:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.416 ************************************ 00:26:01.416 START TEST raid_state_function_test_sb_md_interleaved 00:26:01.416 ************************************ 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:01.416 14:58:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89070 00:26:01.416 Process raid pid: 89070 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89070' 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89070 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89070 ']' 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:01.416 14:58:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:01.416 [2024-11-04 14:58:31.305785] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:26:01.675 [2024-11-04 14:58:31.305989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.675 [2024-11-04 14:58:31.480819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.933 [2024-11-04 14:58:31.609760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.192 [2024-11-04 14:58:31.827785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.192 [2024-11-04 14:58:31.827839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.450 [2024-11-04 14:58:32.290040] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:02.450 [2024-11-04 14:58:32.290132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:02.450 [2024-11-04 14:58:32.290155] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:02.450 [2024-11-04 14:58:32.290178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:02.450 14:58:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.450 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:02.451 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.451 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.451 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.451 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.451 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.451 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.451 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.451 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.451 14:58:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.709 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.709 "name": "Existed_Raid", 00:26:02.709 "uuid": "478fbdf8-3e0f-4818-8b16-81f8548d583c", 00:26:02.709 "strip_size_kb": 0, 00:26:02.709 "state": "configuring", 00:26:02.709 "raid_level": "raid1", 00:26:02.709 "superblock": true, 00:26:02.709 "num_base_bdevs": 2, 00:26:02.709 "num_base_bdevs_discovered": 0, 00:26:02.709 "num_base_bdevs_operational": 2, 00:26:02.709 "base_bdevs_list": [ 00:26:02.709 { 00:26:02.709 "name": "BaseBdev1", 00:26:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.709 "is_configured": false, 00:26:02.709 "data_offset": 0, 00:26:02.709 "data_size": 0 00:26:02.709 }, 00:26:02.709 { 00:26:02.709 "name": "BaseBdev2", 00:26:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.709 "is_configured": false, 00:26:02.709 "data_offset": 0, 00:26:02.709 "data_size": 0 00:26:02.709 } 00:26:02.709 ] 00:26:02.709 }' 00:26:02.709 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.709 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.968 [2024-11-04 14:58:32.786117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:02.968 [2024-11-04 14:58:32.786168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.968 [2024-11-04 14:58:32.794126] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:02.968 [2024-11-04 14:58:32.794187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:02.968 [2024-11-04 14:58:32.794208] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:02.968 [2024-11-04 14:58:32.794264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.968 [2024-11-04 14:58:32.839586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:02.968 BaseBdev1 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.968 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:03.227 [ 00:26:03.227 { 00:26:03.227 "name": "BaseBdev1", 00:26:03.227 "aliases": [ 00:26:03.227 "07e16433-07a8-4276-800a-b6af89d6d515" 00:26:03.227 ], 00:26:03.227 "product_name": "Malloc disk", 00:26:03.227 "block_size": 4128, 00:26:03.227 "num_blocks": 8192, 00:26:03.227 "uuid": "07e16433-07a8-4276-800a-b6af89d6d515", 00:26:03.227 "md_size": 32, 00:26:03.227 
"md_interleave": true, 00:26:03.227 "dif_type": 0, 00:26:03.227 "assigned_rate_limits": { 00:26:03.227 "rw_ios_per_sec": 0, 00:26:03.227 "rw_mbytes_per_sec": 0, 00:26:03.227 "r_mbytes_per_sec": 0, 00:26:03.227 "w_mbytes_per_sec": 0 00:26:03.227 }, 00:26:03.227 "claimed": true, 00:26:03.227 "claim_type": "exclusive_write", 00:26:03.227 "zoned": false, 00:26:03.227 "supported_io_types": { 00:26:03.227 "read": true, 00:26:03.227 "write": true, 00:26:03.227 "unmap": true, 00:26:03.227 "flush": true, 00:26:03.227 "reset": true, 00:26:03.227 "nvme_admin": false, 00:26:03.227 "nvme_io": false, 00:26:03.227 "nvme_io_md": false, 00:26:03.227 "write_zeroes": true, 00:26:03.227 "zcopy": true, 00:26:03.227 "get_zone_info": false, 00:26:03.227 "zone_management": false, 00:26:03.227 "zone_append": false, 00:26:03.227 "compare": false, 00:26:03.227 "compare_and_write": false, 00:26:03.227 "abort": true, 00:26:03.227 "seek_hole": false, 00:26:03.227 "seek_data": false, 00:26:03.227 "copy": true, 00:26:03.227 "nvme_iov_md": false 00:26:03.227 }, 00:26:03.227 "memory_domains": [ 00:26:03.227 { 00:26:03.227 "dma_device_id": "system", 00:26:03.227 "dma_device_type": 1 00:26:03.227 }, 00:26:03.227 { 00:26:03.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.227 "dma_device_type": 2 00:26:03.227 } 00:26:03.227 ], 00:26:03.227 "driver_specific": {} 00:26:03.227 } 00:26:03.227 ] 00:26:03.227 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.227 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:26:03.227 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:03.227 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:03.227 14:58:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:03.227 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:03.227 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:03.227 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:03.227 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.228 "name": "Existed_Raid", 00:26:03.228 "uuid": "73109b04-20ce-401d-bf20-5f400ff091a3", 00:26:03.228 "strip_size_kb": 0, 00:26:03.228 "state": "configuring", 00:26:03.228 "raid_level": "raid1", 
00:26:03.228 "superblock": true, 00:26:03.228 "num_base_bdevs": 2, 00:26:03.228 "num_base_bdevs_discovered": 1, 00:26:03.228 "num_base_bdevs_operational": 2, 00:26:03.228 "base_bdevs_list": [ 00:26:03.228 { 00:26:03.228 "name": "BaseBdev1", 00:26:03.228 "uuid": "07e16433-07a8-4276-800a-b6af89d6d515", 00:26:03.228 "is_configured": true, 00:26:03.228 "data_offset": 256, 00:26:03.228 "data_size": 7936 00:26:03.228 }, 00:26:03.228 { 00:26:03.228 "name": "BaseBdev2", 00:26:03.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.228 "is_configured": false, 00:26:03.228 "data_offset": 0, 00:26:03.228 "data_size": 0 00:26:03.228 } 00:26:03.228 ] 00:26:03.228 }' 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:03.228 14:58:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:03.796 [2024-11-04 14:58:33.407924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:03.796 [2024-11-04 14:58:33.408002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:03.796 [2024-11-04 14:58:33.415988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:03.796 [2024-11-04 14:58:33.418812] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:03.796 [2024-11-04 14:58:33.418880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:03.796 
14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.796 "name": "Existed_Raid", 00:26:03.796 "uuid": "45f06be8-7425-4fe1-9b9b-36b7bd75b92a", 00:26:03.796 "strip_size_kb": 0, 00:26:03.796 "state": "configuring", 00:26:03.796 "raid_level": "raid1", 00:26:03.796 "superblock": true, 00:26:03.796 "num_base_bdevs": 2, 00:26:03.796 "num_base_bdevs_discovered": 1, 00:26:03.796 "num_base_bdevs_operational": 2, 00:26:03.796 "base_bdevs_list": [ 00:26:03.796 { 00:26:03.796 "name": "BaseBdev1", 00:26:03.796 "uuid": "07e16433-07a8-4276-800a-b6af89d6d515", 00:26:03.796 "is_configured": true, 00:26:03.796 "data_offset": 256, 00:26:03.796 "data_size": 7936 00:26:03.796 }, 00:26:03.796 { 00:26:03.796 "name": "BaseBdev2", 00:26:03.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.796 "is_configured": false, 00:26:03.796 "data_offset": 0, 00:26:03.796 "data_size": 0 00:26:03.796 } 00:26:03.796 ] 00:26:03.796 }' 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:26:03.796 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.056 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:26:04.056 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.056 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.315 [2024-11-04 14:58:33.986375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:04.315 [2024-11-04 14:58:33.986679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:04.315 [2024-11-04 14:58:33.986697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:04.315 [2024-11-04 14:58:33.986842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:04.315 [2024-11-04 14:58:33.986947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:04.315 [2024-11-04 14:58:33.986972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:04.315 [2024-11-04 14:58:33.987063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.315 BaseBdev2 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.315 14:58:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.315 [ 00:26:04.315 { 00:26:04.315 "name": "BaseBdev2", 00:26:04.315 "aliases": [ 00:26:04.315 "c4d168a5-c84c-48c0-979e-c305208f818c" 00:26:04.315 ], 00:26:04.315 "product_name": "Malloc disk", 00:26:04.315 "block_size": 4128, 00:26:04.315 "num_blocks": 8192, 00:26:04.315 "uuid": "c4d168a5-c84c-48c0-979e-c305208f818c", 00:26:04.315 "md_size": 32, 00:26:04.315 "md_interleave": true, 00:26:04.315 "dif_type": 0, 00:26:04.315 "assigned_rate_limits": { 00:26:04.315 "rw_ios_per_sec": 0, 00:26:04.315 "rw_mbytes_per_sec": 0, 00:26:04.315 "r_mbytes_per_sec": 0, 00:26:04.315 "w_mbytes_per_sec": 0 00:26:04.315 }, 00:26:04.315 "claimed": true, 00:26:04.315 "claim_type": "exclusive_write", 
00:26:04.315 "zoned": false, 00:26:04.315 "supported_io_types": { 00:26:04.315 "read": true, 00:26:04.315 "write": true, 00:26:04.315 "unmap": true, 00:26:04.315 "flush": true, 00:26:04.315 "reset": true, 00:26:04.315 "nvme_admin": false, 00:26:04.315 "nvme_io": false, 00:26:04.315 "nvme_io_md": false, 00:26:04.315 "write_zeroes": true, 00:26:04.315 "zcopy": true, 00:26:04.315 "get_zone_info": false, 00:26:04.315 "zone_management": false, 00:26:04.315 "zone_append": false, 00:26:04.315 "compare": false, 00:26:04.315 "compare_and_write": false, 00:26:04.315 "abort": true, 00:26:04.315 "seek_hole": false, 00:26:04.315 "seek_data": false, 00:26:04.315 "copy": true, 00:26:04.315 "nvme_iov_md": false 00:26:04.315 }, 00:26:04.315 "memory_domains": [ 00:26:04.315 { 00:26:04.315 "dma_device_id": "system", 00:26:04.315 "dma_device_type": 1 00:26:04.315 }, 00:26:04.315 { 00:26:04.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.315 "dma_device_type": 2 00:26:04.315 } 00:26:04.315 ], 00:26:04.315 "driver_specific": {} 00:26:04.315 } 00:26:04.315 ] 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:04.315 
14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.315 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.315 "name": "Existed_Raid", 00:26:04.315 "uuid": "45f06be8-7425-4fe1-9b9b-36b7bd75b92a", 00:26:04.315 "strip_size_kb": 0, 00:26:04.315 "state": "online", 00:26:04.315 "raid_level": "raid1", 00:26:04.315 "superblock": true, 00:26:04.315 "num_base_bdevs": 2, 00:26:04.315 "num_base_bdevs_discovered": 2, 00:26:04.315 
"num_base_bdevs_operational": 2, 00:26:04.315 "base_bdevs_list": [ 00:26:04.315 { 00:26:04.315 "name": "BaseBdev1", 00:26:04.316 "uuid": "07e16433-07a8-4276-800a-b6af89d6d515", 00:26:04.316 "is_configured": true, 00:26:04.316 "data_offset": 256, 00:26:04.316 "data_size": 7936 00:26:04.316 }, 00:26:04.316 { 00:26:04.316 "name": "BaseBdev2", 00:26:04.316 "uuid": "c4d168a5-c84c-48c0-979e-c305208f818c", 00:26:04.316 "is_configured": true, 00:26:04.316 "data_offset": 256, 00:26:04.316 "data_size": 7936 00:26:04.316 } 00:26:04.316 ] 00:26:04.316 }' 00:26:04.316 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.316 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.882 14:58:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.882 [2024-11-04 14:58:34.563056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.882 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:04.882 "name": "Existed_Raid", 00:26:04.882 "aliases": [ 00:26:04.882 "45f06be8-7425-4fe1-9b9b-36b7bd75b92a" 00:26:04.882 ], 00:26:04.882 "product_name": "Raid Volume", 00:26:04.882 "block_size": 4128, 00:26:04.883 "num_blocks": 7936, 00:26:04.883 "uuid": "45f06be8-7425-4fe1-9b9b-36b7bd75b92a", 00:26:04.883 "md_size": 32, 00:26:04.883 "md_interleave": true, 00:26:04.883 "dif_type": 0, 00:26:04.883 "assigned_rate_limits": { 00:26:04.883 "rw_ios_per_sec": 0, 00:26:04.883 "rw_mbytes_per_sec": 0, 00:26:04.883 "r_mbytes_per_sec": 0, 00:26:04.883 "w_mbytes_per_sec": 0 00:26:04.883 }, 00:26:04.883 "claimed": false, 00:26:04.883 "zoned": false, 00:26:04.883 "supported_io_types": { 00:26:04.883 "read": true, 00:26:04.883 "write": true, 00:26:04.883 "unmap": false, 00:26:04.883 "flush": false, 00:26:04.883 "reset": true, 00:26:04.883 "nvme_admin": false, 00:26:04.883 "nvme_io": false, 00:26:04.883 "nvme_io_md": false, 00:26:04.883 "write_zeroes": true, 00:26:04.883 "zcopy": false, 00:26:04.883 "get_zone_info": false, 00:26:04.883 "zone_management": false, 00:26:04.883 "zone_append": false, 00:26:04.883 "compare": false, 00:26:04.883 "compare_and_write": false, 00:26:04.883 "abort": false, 00:26:04.883 "seek_hole": false, 00:26:04.883 "seek_data": false, 00:26:04.883 "copy": false, 00:26:04.883 "nvme_iov_md": false 00:26:04.883 }, 00:26:04.883 "memory_domains": [ 00:26:04.883 { 00:26:04.883 "dma_device_id": "system", 00:26:04.883 "dma_device_type": 1 00:26:04.883 }, 00:26:04.883 { 00:26:04.883 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:26:04.883 "dma_device_type": 2 00:26:04.883 }, 00:26:04.883 { 00:26:04.883 "dma_device_id": "system", 00:26:04.883 "dma_device_type": 1 00:26:04.883 }, 00:26:04.883 { 00:26:04.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.883 "dma_device_type": 2 00:26:04.883 } 00:26:04.883 ], 00:26:04.883 "driver_specific": { 00:26:04.883 "raid": { 00:26:04.883 "uuid": "45f06be8-7425-4fe1-9b9b-36b7bd75b92a", 00:26:04.883 "strip_size_kb": 0, 00:26:04.883 "state": "online", 00:26:04.883 "raid_level": "raid1", 00:26:04.883 "superblock": true, 00:26:04.883 "num_base_bdevs": 2, 00:26:04.883 "num_base_bdevs_discovered": 2, 00:26:04.883 "num_base_bdevs_operational": 2, 00:26:04.883 "base_bdevs_list": [ 00:26:04.883 { 00:26:04.883 "name": "BaseBdev1", 00:26:04.883 "uuid": "07e16433-07a8-4276-800a-b6af89d6d515", 00:26:04.883 "is_configured": true, 00:26:04.883 "data_offset": 256, 00:26:04.883 "data_size": 7936 00:26:04.883 }, 00:26:04.883 { 00:26:04.883 "name": "BaseBdev2", 00:26:04.883 "uuid": "c4d168a5-c84c-48c0-979e-c305208f818c", 00:26:04.883 "is_configured": true, 00:26:04.883 "data_offset": 256, 00:26:04.883 "data_size": 7936 00:26:04.883 } 00:26:04.883 ] 00:26:04.883 } 00:26:04.883 } 00:26:04.883 }' 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:04.883 BaseBdev2' 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:26:04.883 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:26:05.142 
14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.142 [2024-11-04 14:58:34.826773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:05.142 14:58:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.142 "name": "Existed_Raid", 00:26:05.142 "uuid": "45f06be8-7425-4fe1-9b9b-36b7bd75b92a", 00:26:05.142 "strip_size_kb": 0, 00:26:05.142 "state": "online", 00:26:05.142 "raid_level": "raid1", 00:26:05.142 "superblock": true, 00:26:05.142 "num_base_bdevs": 2, 00:26:05.142 "num_base_bdevs_discovered": 1, 00:26:05.142 "num_base_bdevs_operational": 1, 00:26:05.142 "base_bdevs_list": [ 00:26:05.142 { 00:26:05.142 "name": null, 00:26:05.142 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:05.142 "is_configured": false, 00:26:05.142 "data_offset": 0, 00:26:05.142 "data_size": 7936 00:26:05.142 }, 00:26:05.142 { 00:26:05.142 "name": "BaseBdev2", 00:26:05.142 "uuid": "c4d168a5-c84c-48c0-979e-c305208f818c", 00:26:05.142 "is_configured": true, 00:26:05.142 "data_offset": 256, 00:26:05.142 "data_size": 7936 00:26:05.142 } 00:26:05.142 ] 00:26:05.142 }' 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.142 14:58:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:05.737 14:58:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.737 [2024-11-04 14:58:35.500918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:05.737 [2024-11-04 14:58:35.501112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:05.737 [2024-11-04 14:58:35.581504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.737 [2024-11-04 14:58:35.581604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.737 [2024-11-04 14:58:35.581626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.737 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89070 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89070 ']' 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89070 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89070 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:05.995 killing process with pid 89070 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89070' 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89070 00:26:05.995 [2024-11-04 14:58:35.678778] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:05.995 14:58:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89070 00:26:05.995 [2024-11-04 14:58:35.694424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:06.931 
14:58:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:26:06.931 00:26:06.931 real 0m5.495s 00:26:06.931 user 0m8.329s 00:26:06.931 sys 0m0.830s 00:26:06.931 14:58:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:06.931 14:58:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:06.931 ************************************ 00:26:06.931 END TEST raid_state_function_test_sb_md_interleaved 00:26:06.931 ************************************ 00:26:06.931 14:58:36 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:26:06.931 14:58:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:06.931 14:58:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:06.931 14:58:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:06.931 ************************************ 00:26:06.931 START TEST raid_superblock_test_md_interleaved 00:26:06.931 ************************************ 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89323 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89323 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89323 ']' 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:06.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:06.931 14:58:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:07.247 [2024-11-04 14:58:36.866482] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:26:07.247 [2024-11-04 14:58:36.866678] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89323 ] 00:26:07.247 [2024-11-04 14:58:37.037543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.505 [2024-11-04 14:58:37.172427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.505 [2024-11-04 14:58:37.379704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:07.505 [2024-11-04 14:58:37.379785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:08.072 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.073 malloc1 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.073 [2024-11-04 14:58:37.910694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:08.073 [2024-11-04 14:58:37.910819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.073 [2024-11-04 14:58:37.910885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:08.073 [2024-11-04 14:58:37.910908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.073 
[2024-11-04 14:58:37.914046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.073 [2024-11-04 14:58:37.914102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:08.073 pt1 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.073 malloc2 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.073 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.331 [2024-11-04 14:58:37.968441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:08.331 [2024-11-04 14:58:37.968513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.331 [2024-11-04 14:58:37.968575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:08.331 [2024-11-04 14:58:37.968611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.331 [2024-11-04 14:58:37.971608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.331 [2024-11-04 14:58:37.971695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:08.331 pt2 00:26:08.331 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.331 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:08.331 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:08.331 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:26:08.331 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.331 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.331 [2024-11-04 14:58:37.980590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:08.331 [2024-11-04 14:58:37.983631] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:08.331 [2024-11-04 14:58:37.983953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:08.331 [2024-11-04 14:58:37.983991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:08.331 [2024-11-04 14:58:37.984083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:08.331 [2024-11-04 14:58:37.984183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:08.331 [2024-11-04 14:58:37.984220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:08.331 [2024-11-04 14:58:37.984385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.331 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.332 
14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.332 14:58:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.332 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.332 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.332 "name": "raid_bdev1", 00:26:08.332 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:08.332 "strip_size_kb": 0, 00:26:08.332 "state": "online", 00:26:08.332 "raid_level": "raid1", 00:26:08.332 "superblock": true, 00:26:08.332 "num_base_bdevs": 2, 00:26:08.332 "num_base_bdevs_discovered": 2, 00:26:08.332 "num_base_bdevs_operational": 2, 00:26:08.332 "base_bdevs_list": [ 00:26:08.332 { 00:26:08.332 "name": "pt1", 00:26:08.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:08.332 "is_configured": true, 00:26:08.332 "data_offset": 256, 00:26:08.332 "data_size": 7936 00:26:08.332 }, 00:26:08.332 { 00:26:08.332 "name": "pt2", 00:26:08.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:08.332 "is_configured": true, 00:26:08.332 "data_offset": 256, 00:26:08.332 "data_size": 7936 00:26:08.332 } 00:26:08.332 ] 00:26:08.332 }' 00:26:08.332 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.332 14:58:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:08.899 [2024-11-04 14:58:38.513030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:08.899 "name": "raid_bdev1", 00:26:08.899 "aliases": [ 00:26:08.899 "ce6a892a-73ae-473e-893a-8465decfd5b5" 00:26:08.899 ], 00:26:08.899 "product_name": "Raid Volume", 00:26:08.899 "block_size": 4128, 00:26:08.899 "num_blocks": 7936, 00:26:08.899 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:08.899 "md_size": 32, 
00:26:08.899 "md_interleave": true, 00:26:08.899 "dif_type": 0, 00:26:08.899 "assigned_rate_limits": { 00:26:08.899 "rw_ios_per_sec": 0, 00:26:08.899 "rw_mbytes_per_sec": 0, 00:26:08.899 "r_mbytes_per_sec": 0, 00:26:08.899 "w_mbytes_per_sec": 0 00:26:08.899 }, 00:26:08.899 "claimed": false, 00:26:08.899 "zoned": false, 00:26:08.899 "supported_io_types": { 00:26:08.899 "read": true, 00:26:08.899 "write": true, 00:26:08.899 "unmap": false, 00:26:08.899 "flush": false, 00:26:08.899 "reset": true, 00:26:08.899 "nvme_admin": false, 00:26:08.899 "nvme_io": false, 00:26:08.899 "nvme_io_md": false, 00:26:08.899 "write_zeroes": true, 00:26:08.899 "zcopy": false, 00:26:08.899 "get_zone_info": false, 00:26:08.899 "zone_management": false, 00:26:08.899 "zone_append": false, 00:26:08.899 "compare": false, 00:26:08.899 "compare_and_write": false, 00:26:08.899 "abort": false, 00:26:08.899 "seek_hole": false, 00:26:08.899 "seek_data": false, 00:26:08.899 "copy": false, 00:26:08.899 "nvme_iov_md": false 00:26:08.899 }, 00:26:08.899 "memory_domains": [ 00:26:08.899 { 00:26:08.899 "dma_device_id": "system", 00:26:08.899 "dma_device_type": 1 00:26:08.899 }, 00:26:08.899 { 00:26:08.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.899 "dma_device_type": 2 00:26:08.899 }, 00:26:08.899 { 00:26:08.899 "dma_device_id": "system", 00:26:08.899 "dma_device_type": 1 00:26:08.899 }, 00:26:08.899 { 00:26:08.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.899 "dma_device_type": 2 00:26:08.899 } 00:26:08.899 ], 00:26:08.899 "driver_specific": { 00:26:08.899 "raid": { 00:26:08.899 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:08.899 "strip_size_kb": 0, 00:26:08.899 "state": "online", 00:26:08.899 "raid_level": "raid1", 00:26:08.899 "superblock": true, 00:26:08.899 "num_base_bdevs": 2, 00:26:08.899 "num_base_bdevs_discovered": 2, 00:26:08.899 "num_base_bdevs_operational": 2, 00:26:08.899 "base_bdevs_list": [ 00:26:08.899 { 00:26:08.899 "name": "pt1", 00:26:08.899 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:26:08.899 "is_configured": true, 00:26:08.899 "data_offset": 256, 00:26:08.899 "data_size": 7936 00:26:08.899 }, 00:26:08.899 { 00:26:08.899 "name": "pt2", 00:26:08.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:08.899 "is_configured": true, 00:26:08.899 "data_offset": 256, 00:26:08.899 "data_size": 7936 00:26:08.899 } 00:26:08.899 ] 00:26:08.899 } 00:26:08.899 } 00:26:08.899 }' 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:08.899 pt2' 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.899 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:26:08.900 14:58:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.900 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.900 [2024-11-04 14:58:38.789093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ce6a892a-73ae-473e-893a-8465decfd5b5 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z ce6a892a-73ae-473e-893a-8465decfd5b5 ']' 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.159 [2024-11-04 14:58:38.840811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:09.159 [2024-11-04 14:58:38.840836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:09.159 [2024-11-04 14:58:38.840942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.159 [2024-11-04 14:58:38.841012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.159 [2024-11-04 14:58:38.841029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.159 14:58:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.159 14:58:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.159 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.159 [2024-11-04 14:58:38.980874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:09.159 [2024-11-04 14:58:38.983622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:09.159 [2024-11-04 14:58:38.983770] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:26:09.160 [2024-11-04 14:58:38.983855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:09.160 [2024-11-04 14:58:38.983879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:09.160 [2024-11-04 14:58:38.983892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:09.160 request: 00:26:09.160 { 00:26:09.160 "name": "raid_bdev1", 00:26:09.160 "raid_level": "raid1", 00:26:09.160 "base_bdevs": [ 00:26:09.160 "malloc1", 00:26:09.160 "malloc2" 00:26:09.160 ], 00:26:09.160 "superblock": false, 00:26:09.160 "method": "bdev_raid_create", 00:26:09.160 "req_id": 1 00:26:09.160 } 00:26:09.160 Got JSON-RPC error response 00:26:09.160 response: 00:26:09.160 { 00:26:09.160 "code": -17, 00:26:09.160 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:09.160 } 00:26:09.160 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:09.160 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:26:09.160 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:09.160 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:09.160 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:09.160 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.160 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.160 14:58:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.160 14:58:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:09.160 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.160 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:09.160 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:09.160 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:09.160 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.160 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.160 [2024-11-04 14:58:39.048914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:09.160 [2024-11-04 14:58:39.048976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.160 [2024-11-04 14:58:39.049002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:09.160 [2024-11-04 14:58:39.049019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.418 [2024-11-04 14:58:39.052248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.419 [2024-11-04 14:58:39.052306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:09.419 [2024-11-04 14:58:39.052370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:09.419 [2024-11-04 14:58:39.052451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:09.419 pt1 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.419 14:58:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.419 
"name": "raid_bdev1", 00:26:09.419 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:09.419 "strip_size_kb": 0, 00:26:09.419 "state": "configuring", 00:26:09.419 "raid_level": "raid1", 00:26:09.419 "superblock": true, 00:26:09.419 "num_base_bdevs": 2, 00:26:09.419 "num_base_bdevs_discovered": 1, 00:26:09.419 "num_base_bdevs_operational": 2, 00:26:09.419 "base_bdevs_list": [ 00:26:09.419 { 00:26:09.419 "name": "pt1", 00:26:09.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:09.419 "is_configured": true, 00:26:09.419 "data_offset": 256, 00:26:09.419 "data_size": 7936 00:26:09.419 }, 00:26:09.419 { 00:26:09.419 "name": null, 00:26:09.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:09.419 "is_configured": false, 00:26:09.419 "data_offset": 256, 00:26:09.419 "data_size": 7936 00:26:09.419 } 00:26:09.419 ] 00:26:09.419 }' 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.419 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.985 [2024-11-04 14:58:39.585028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:09.985 [2024-11-04 14:58:39.585138] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.985 [2024-11-04 14:58:39.585171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:09.985 [2024-11-04 14:58:39.585188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.985 [2024-11-04 14:58:39.585479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.985 [2024-11-04 14:58:39.585507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:09.985 [2024-11-04 14:58:39.585598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:09.985 [2024-11-04 14:58:39.585651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:09.985 [2024-11-04 14:58:39.585780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:09.985 [2024-11-04 14:58:39.585801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:09.985 [2024-11-04 14:58:39.585920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:09.985 [2024-11-04 14:58:39.586039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:09.985 [2024-11-04 14:58:39.586063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:09.985 [2024-11-04 14:58:39.586166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.985 pt2 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:09.985 14:58:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.985 "name": 
"raid_bdev1", 00:26:09.985 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:09.985 "strip_size_kb": 0, 00:26:09.985 "state": "online", 00:26:09.985 "raid_level": "raid1", 00:26:09.985 "superblock": true, 00:26:09.985 "num_base_bdevs": 2, 00:26:09.985 "num_base_bdevs_discovered": 2, 00:26:09.985 "num_base_bdevs_operational": 2, 00:26:09.985 "base_bdevs_list": [ 00:26:09.985 { 00:26:09.985 "name": "pt1", 00:26:09.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:09.985 "is_configured": true, 00:26:09.985 "data_offset": 256, 00:26:09.985 "data_size": 7936 00:26:09.985 }, 00:26:09.985 { 00:26:09.985 "name": "pt2", 00:26:09.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:09.985 "is_configured": true, 00:26:09.985 "data_offset": 256, 00:26:09.985 "data_size": 7936 00:26:09.985 } 00:26:09.985 ] 00:26:09.985 }' 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.985 14:58:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:10.243 14:58:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:10.243 [2024-11-04 14:58:40.109575] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:10.243 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.501 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:10.501 "name": "raid_bdev1", 00:26:10.501 "aliases": [ 00:26:10.501 "ce6a892a-73ae-473e-893a-8465decfd5b5" 00:26:10.501 ], 00:26:10.501 "product_name": "Raid Volume", 00:26:10.501 "block_size": 4128, 00:26:10.501 "num_blocks": 7936, 00:26:10.501 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:10.501 "md_size": 32, 00:26:10.501 "md_interleave": true, 00:26:10.501 "dif_type": 0, 00:26:10.501 "assigned_rate_limits": { 00:26:10.501 "rw_ios_per_sec": 0, 00:26:10.501 "rw_mbytes_per_sec": 0, 00:26:10.501 "r_mbytes_per_sec": 0, 00:26:10.501 "w_mbytes_per_sec": 0 00:26:10.501 }, 00:26:10.501 "claimed": false, 00:26:10.501 "zoned": false, 00:26:10.501 "supported_io_types": { 00:26:10.501 "read": true, 00:26:10.501 "write": true, 00:26:10.501 "unmap": false, 00:26:10.501 "flush": false, 00:26:10.501 "reset": true, 00:26:10.501 "nvme_admin": false, 00:26:10.501 "nvme_io": false, 00:26:10.501 "nvme_io_md": false, 00:26:10.501 "write_zeroes": true, 00:26:10.501 "zcopy": false, 00:26:10.501 "get_zone_info": false, 00:26:10.501 "zone_management": false, 00:26:10.501 "zone_append": false, 00:26:10.501 "compare": false, 00:26:10.501 "compare_and_write": false, 00:26:10.501 "abort": false, 00:26:10.501 "seek_hole": false, 00:26:10.501 "seek_data": false, 00:26:10.501 "copy": false, 00:26:10.501 "nvme_iov_md": 
false 00:26:10.501 }, 00:26:10.501 "memory_domains": [ 00:26:10.501 { 00:26:10.501 "dma_device_id": "system", 00:26:10.501 "dma_device_type": 1 00:26:10.501 }, 00:26:10.501 { 00:26:10.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.501 "dma_device_type": 2 00:26:10.501 }, 00:26:10.501 { 00:26:10.501 "dma_device_id": "system", 00:26:10.501 "dma_device_type": 1 00:26:10.501 }, 00:26:10.501 { 00:26:10.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.501 "dma_device_type": 2 00:26:10.501 } 00:26:10.501 ], 00:26:10.501 "driver_specific": { 00:26:10.501 "raid": { 00:26:10.501 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:10.501 "strip_size_kb": 0, 00:26:10.501 "state": "online", 00:26:10.501 "raid_level": "raid1", 00:26:10.501 "superblock": true, 00:26:10.501 "num_base_bdevs": 2, 00:26:10.501 "num_base_bdevs_discovered": 2, 00:26:10.501 "num_base_bdevs_operational": 2, 00:26:10.501 "base_bdevs_list": [ 00:26:10.501 { 00:26:10.501 "name": "pt1", 00:26:10.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:10.501 "is_configured": true, 00:26:10.501 "data_offset": 256, 00:26:10.501 "data_size": 7936 00:26:10.501 }, 00:26:10.501 { 00:26:10.501 "name": "pt2", 00:26:10.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.501 "is_configured": true, 00:26:10.501 "data_offset": 256, 00:26:10.501 "data_size": 7936 00:26:10.501 } 00:26:10.501 ] 00:26:10.501 } 00:26:10.501 } 00:26:10.501 }' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:10.502 pt2' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.502 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:10.502 [2024-11-04 14:58:40.377543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' ce6a892a-73ae-473e-893a-8465decfd5b5 '!=' ce6a892a-73ae-473e-893a-8465decfd5b5 ']' 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.771 [2024-11-04 14:58:40.429296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:10.771 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.772 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.772 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:26:10.772 "name": "raid_bdev1", 00:26:10.772 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:10.772 "strip_size_kb": 0, 00:26:10.772 "state": "online", 00:26:10.772 "raid_level": "raid1", 00:26:10.772 "superblock": true, 00:26:10.772 "num_base_bdevs": 2, 00:26:10.772 "num_base_bdevs_discovered": 1, 00:26:10.772 "num_base_bdevs_operational": 1, 00:26:10.772 "base_bdevs_list": [ 00:26:10.772 { 00:26:10.772 "name": null, 00:26:10.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.772 "is_configured": false, 00:26:10.772 "data_offset": 0, 00:26:10.772 "data_size": 7936 00:26:10.772 }, 00:26:10.772 { 00:26:10.772 "name": "pt2", 00:26:10.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:10.772 "is_configured": true, 00:26:10.772 "data_offset": 256, 00:26:10.772 "data_size": 7936 00:26:10.772 } 00:26:10.772 ] 00:26:10.772 }' 00:26:10.772 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.772 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.342 [2024-11-04 14:58:40.953519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:11.342 [2024-11-04 14:58:40.953599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:11.342 [2024-11-04 14:58:40.953717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:11.342 [2024-11-04 14:58:40.953789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:26:11.342 [2024-11-04 14:58:40.953809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.342 14:58:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.342 [2024-11-04 14:58:41.029510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:11.342 [2024-11-04 14:58:41.029630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.342 [2024-11-04 14:58:41.029658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:11.342 [2024-11-04 14:58:41.029676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.342 [2024-11-04 14:58:41.032523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.342 [2024-11-04 14:58:41.032601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:11.342 [2024-11-04 14:58:41.032690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:11.342 [2024-11-04 14:58:41.032754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:11.342 [2024-11-04 14:58:41.032891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:11.342 [2024-11-04 14:58:41.032912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:26:11.342 [2024-11-04 14:58:41.033023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:11.342 [2024-11-04 14:58:41.033116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:11.342 [2024-11-04 14:58:41.033140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:11.342 [2024-11-04 14:58:41.033274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.342 pt2 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:11.342 14:58:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.342 "name": "raid_bdev1", 00:26:11.342 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:11.342 "strip_size_kb": 0, 00:26:11.342 "state": "online", 00:26:11.342 "raid_level": "raid1", 00:26:11.342 "superblock": true, 00:26:11.342 "num_base_bdevs": 2, 00:26:11.342 "num_base_bdevs_discovered": 1, 00:26:11.342 "num_base_bdevs_operational": 1, 00:26:11.342 "base_bdevs_list": [ 00:26:11.342 { 00:26:11.342 "name": null, 00:26:11.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.342 "is_configured": false, 00:26:11.342 "data_offset": 256, 00:26:11.342 "data_size": 7936 00:26:11.342 }, 00:26:11.342 { 00:26:11.342 "name": "pt2", 00:26:11.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:11.342 "is_configured": true, 00:26:11.342 "data_offset": 256, 00:26:11.342 "data_size": 7936 00:26:11.342 } 00:26:11.342 ] 00:26:11.342 }' 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.342 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:11.909 14:58:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.909 [2024-11-04 14:58:41.573630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:11.909 [2024-11-04 14:58:41.573681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:11.909 [2024-11-04 14:58:41.573779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:11.909 [2024-11-04 14:58:41.573884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:11.909 [2024-11-04 14:58:41.573920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.909 [2024-11-04 14:58:41.637668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:11.909 [2024-11-04 14:58:41.637738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.909 [2024-11-04 14:58:41.637772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:26:11.909 [2024-11-04 14:58:41.637787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.909 [2024-11-04 14:58:41.640558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.909 [2024-11-04 14:58:41.640646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:11.909 [2024-11-04 14:58:41.640716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:11.909 [2024-11-04 14:58:41.640770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:11.909 [2024-11-04 14:58:41.640920] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:11.909 [2024-11-04 14:58:41.640937] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:11.909 [2024-11-04 14:58:41.640958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:26:11.909 [2024-11-04 14:58:41.641022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:11.909 [2024-11-04 14:58:41.641115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:26:11.909 [2024-11-04 14:58:41.641129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:11.909 [2024-11-04 14:58:41.641204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:11.909 [2024-11-04 14:58:41.641333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:11.909 [2024-11-04 14:58:41.641353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:11.909 [2024-11-04 14:58:41.641449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.909 pt1 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:11.909 14:58:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.909 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.909 "name": "raid_bdev1", 00:26:11.909 "uuid": "ce6a892a-73ae-473e-893a-8465decfd5b5", 00:26:11.909 "strip_size_kb": 0, 00:26:11.909 "state": "online", 00:26:11.909 "raid_level": "raid1", 00:26:11.910 "superblock": true, 00:26:11.910 "num_base_bdevs": 2, 00:26:11.910 "num_base_bdevs_discovered": 1, 00:26:11.910 "num_base_bdevs_operational": 1, 00:26:11.910 "base_bdevs_list": [ 00:26:11.910 { 00:26:11.910 "name": null, 00:26:11.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.910 "is_configured": false, 00:26:11.910 "data_offset": 256, 00:26:11.910 "data_size": 7936 00:26:11.910 }, 00:26:11.910 { 00:26:11.910 "name": "pt2", 00:26:11.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:11.910 "is_configured": true, 00:26:11.910 "data_offset": 256, 00:26:11.910 "data_size": 7936 00:26:11.910 } 00:26:11.910 ] 00:26:11.910 }' 00:26:11.910 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.910 14:58:41 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:12.477 [2024-11-04 14:58:42.222105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' ce6a892a-73ae-473e-893a-8465decfd5b5 '!=' ce6a892a-73ae-473e-893a-8465decfd5b5 ']' 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89323 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89323 ']' 00:26:12.477 14:58:42 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89323 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89323 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:12.477 killing process with pid 89323 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89323' 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89323 00:26:12.477 [2024-11-04 14:58:42.299069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:12.477 [2024-11-04 14:58:42.299176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:12.477 14:58:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89323 00:26:12.477 [2024-11-04 14:58:42.299300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:12.477 [2024-11-04 14:58:42.299325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:12.736 [2024-11-04 14:58:42.456066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:13.672 14:58:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:26:13.672 00:26:13.672 real 0m6.716s 00:26:13.672 user 0m10.662s 00:26:13.672 sys 0m1.004s 
00:26:13.672 14:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:13.672 14:58:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:13.672 ************************************ 00:26:13.672 END TEST raid_superblock_test_md_interleaved 00:26:13.672 ************************************ 00:26:13.672 14:58:43 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:26:13.672 14:58:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:26:13.672 14:58:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:13.672 14:58:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:13.672 ************************************ 00:26:13.672 START TEST raid_rebuild_test_sb_md_interleaved 00:26:13.672 ************************************ 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:13.672 14:58:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:26:13.672 
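The `(( i = 1 )) ... (( i <= num_base_bdevs )) ... echo BaseBdev$i` trace above is the harness building its `base_bdevs` array (`'BaseBdev1' 'BaseBdev2'` for this two-disk raid1 test). A small bash reconstruction of that loop, with the same names as the trace:

```shell
# Rebuild the base bdev name list the way the traced loop does (bash arrays)
num_base_bdevs=2
base_bdevs=()
i=1
while [ "$i" -le "$num_base_bdevs" ]; do
    base_bdevs+=("BaseBdev$i")
    i=$((i + 1))
done
echo "${base_bdevs[@]}"
```

This prints `BaseBdev1 BaseBdev2`, matching the `base_bdevs=('BaseBdev1' 'BaseBdev2')` assignment visible in the trace.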
14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89654 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89654 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89654 ']' 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:13.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:13.672 14:58:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:13.931 [2024-11-04 14:58:43.635675] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:26:13.931 [2024-11-04 14:58:43.635854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89654 ] 00:26:13.931 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:13.931 Zero copy mechanism will not be used. 
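`waitforlisten 89654` above blocks until the freshly started bdevperf process is up and listening on `/var/tmp/spdk.sock`. A toy sketch of that polling pattern, using a temp file as a stand-in for the daemon's RPC socket (the path, delays, and retry cap are all illustrative; the real helper also probes the RPC server, not just the path):

```shell
# Stand-in for waitforlisten: poll until the "socket" path appears or retries run out
sock=$(mktemp -u)                 # illustrative path; real tests use /var/tmp/spdk.sock
( sleep 0.2; touch "$sock" ) &    # stand-in for the daemon creating its socket
i=0
until [ -e "$sock" ] || [ "$i" -ge 50 ]; do
    i=$((i + 1))
    sleep 0.1
done
[ -e "$sock" ] && echo "listening"
```

The real `waitforlisten` additionally fails the test if `max_retries` is exhausted, which is why a hung startup shows up as a timeout rather than a silent stall.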
00:26:13.931 [2024-11-04 14:58:43.805057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.189 [2024-11-04 14:58:43.928837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.448 [2024-11-04 14:58:44.144593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:14.448 [2024-11-04 14:58:44.144662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 BaseBdev1_malloc 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 [2024-11-04 14:58:44.698573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:15.015 [2024-11-04 14:58:44.698688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.015 
[2024-11-04 14:58:44.698717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:15.015 [2024-11-04 14:58:44.698736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.015 [2024-11-04 14:58:44.701217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.015 [2024-11-04 14:58:44.701316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:15.015 BaseBdev1 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 BaseBdev2_malloc 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 [2024-11-04 14:58:44.754454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:15.015 [2024-11-04 14:58:44.754555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.015 [2024-11-04 14:58:44.754582] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:15.015 [2024-11-04 14:58:44.754617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.015 [2024-11-04 14:58:44.757049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.015 [2024-11-04 14:58:44.757119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:15.015 BaseBdev2 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 spare_malloc 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 spare_delay 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.015 14:58:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 [2024-11-04 14:58:44.832082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:15.015 [2024-11-04 14:58:44.832189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.015 [2024-11-04 14:58:44.832219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:15.015 [2024-11-04 14:58:44.832262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.015 [2024-11-04 14:58:44.834921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.015 [2024-11-04 14:58:44.834992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:15.015 spare 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 [2024-11-04 14:58:44.840130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:15.015 [2024-11-04 14:58:44.842822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:15.015 [2024-11-04 14:58:44.843113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:15.015 [2024-11-04 14:58:44.843150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:15.015 [2024-11-04 14:58:44.843278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:26:15.015 [2024-11-04 14:58:44.843399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:15.015 [2024-11-04 14:58:44.843424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:15.015 [2024-11-04 14:58:44.843520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:15.015 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.016 "name": "raid_bdev1", 00:26:15.016 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:15.016 "strip_size_kb": 0, 00:26:15.016 "state": "online", 00:26:15.016 "raid_level": "raid1", 00:26:15.016 "superblock": true, 00:26:15.016 "num_base_bdevs": 2, 00:26:15.016 "num_base_bdevs_discovered": 2, 00:26:15.016 "num_base_bdevs_operational": 2, 00:26:15.016 "base_bdevs_list": [ 00:26:15.016 { 00:26:15.016 "name": "BaseBdev1", 00:26:15.016 "uuid": "3e159197-7cda-5c98-8e2e-a827d8592c1c", 00:26:15.016 "is_configured": true, 00:26:15.016 "data_offset": 256, 00:26:15.016 "data_size": 7936 00:26:15.016 }, 00:26:15.016 { 00:26:15.016 "name": "BaseBdev2", 00:26:15.016 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:15.016 "is_configured": true, 00:26:15.016 "data_offset": 256, 00:26:15.016 "data_size": 7936 00:26:15.016 } 00:26:15.016 ] 00:26:15.016 }' 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.016 14:58:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.583 [2024-11-04 14:58:45.376751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.583 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.844 [2024-11-04 14:58:45.476315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:15.844 14:58:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.844 14:58:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.844 "name": "raid_bdev1", 00:26:15.844 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:15.844 "strip_size_kb": 0, 00:26:15.844 "state": "online", 00:26:15.844 "raid_level": "raid1", 00:26:15.844 "superblock": true, 00:26:15.844 "num_base_bdevs": 2, 00:26:15.844 "num_base_bdevs_discovered": 1, 00:26:15.844 "num_base_bdevs_operational": 1, 00:26:15.844 "base_bdevs_list": [ 00:26:15.844 { 00:26:15.844 "name": null, 00:26:15.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.844 "is_configured": false, 00:26:15.844 "data_offset": 0, 00:26:15.844 "data_size": 7936 00:26:15.844 }, 00:26:15.844 { 00:26:15.844 "name": "BaseBdev2", 00:26:15.844 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:15.844 "is_configured": true, 00:26:15.844 "data_offset": 256, 00:26:15.844 "data_size": 7936 00:26:15.844 } 00:26:15.844 ] 00:26:15.844 }' 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.844 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:16.102 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:16.102 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.102 14:58:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:16.360 [2024-11-04 14:58:45.996504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:16.360 [2024-11-04 14:58:46.013146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:16.360 14:58:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.360 14:58:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:26:16.360 [2024-11-04 14:58:46.015935] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.296 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:17.296 "name": "raid_bdev1", 00:26:17.296 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:17.296 "strip_size_kb": 0, 00:26:17.296 "state": "online", 00:26:17.296 "raid_level": "raid1", 00:26:17.296 "superblock": true, 00:26:17.296 "num_base_bdevs": 2, 00:26:17.296 "num_base_bdevs_discovered": 2, 00:26:17.296 "num_base_bdevs_operational": 2, 00:26:17.296 "process": { 00:26:17.296 "type": "rebuild", 00:26:17.296 "target": "spare", 
00:26:17.296 "progress": { 00:26:17.296 "blocks": 2560, 00:26:17.296 "percent": 32 00:26:17.296 } 00:26:17.296 }, 00:26:17.296 "base_bdevs_list": [ 00:26:17.296 { 00:26:17.296 "name": "spare", 00:26:17.296 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:17.296 "is_configured": true, 00:26:17.296 "data_offset": 256, 00:26:17.296 "data_size": 7936 00:26:17.296 }, 00:26:17.296 { 00:26:17.296 "name": "BaseBdev2", 00:26:17.296 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:17.296 "is_configured": true, 00:26:17.296 "data_offset": 256, 00:26:17.296 "data_size": 7936 00:26:17.297 } 00:26:17.297 ] 00:26:17.297 }' 00:26:17.297 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:17.297 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:17.297 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:17.297 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:17.297 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:17.297 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.297 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:17.297 [2024-11-04 14:58:47.182184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:17.555 [2024-11-04 14:58:47.226920] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:17.555 [2024-11-04 14:58:47.227024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.555 [2024-11-04 14:58:47.227092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:26:17.555 [2024-11-04 14:58:47.227111] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.555 "name": "raid_bdev1", 00:26:17.555 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:17.555 "strip_size_kb": 0, 00:26:17.555 "state": "online", 00:26:17.555 "raid_level": "raid1", 00:26:17.555 "superblock": true, 00:26:17.555 "num_base_bdevs": 2, 00:26:17.555 "num_base_bdevs_discovered": 1, 00:26:17.555 "num_base_bdevs_operational": 1, 00:26:17.555 "base_bdevs_list": [ 00:26:17.555 { 00:26:17.555 "name": null, 00:26:17.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.555 "is_configured": false, 00:26:17.555 "data_offset": 0, 00:26:17.555 "data_size": 7936 00:26:17.555 }, 00:26:17.555 { 00:26:17.555 "name": "BaseBdev2", 00:26:17.555 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:17.555 "is_configured": true, 00:26:17.555 "data_offset": 256, 00:26:17.555 "data_size": 7936 00:26:17.555 } 00:26:17.555 ] 00:26:17.555 }' 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.555 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.122 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:18.122 "name": "raid_bdev1", 00:26:18.122 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:18.122 "strip_size_kb": 0, 00:26:18.122 "state": "online", 00:26:18.122 "raid_level": "raid1", 00:26:18.122 "superblock": true, 00:26:18.123 "num_base_bdevs": 2, 00:26:18.123 "num_base_bdevs_discovered": 1, 00:26:18.123 "num_base_bdevs_operational": 1, 00:26:18.123 "base_bdevs_list": [ 00:26:18.123 { 00:26:18.123 "name": null, 00:26:18.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.123 "is_configured": false, 00:26:18.123 "data_offset": 0, 00:26:18.123 "data_size": 7936 00:26:18.123 }, 00:26:18.123 { 00:26:18.123 "name": "BaseBdev2", 00:26:18.123 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:18.123 "is_configured": true, 00:26:18.123 "data_offset": 256, 00:26:18.123 "data_size": 7936 00:26:18.123 } 00:26:18.123 ] 00:26:18.123 }' 00:26:18.123 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:18.123 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:18.123 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:18.123 
14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:18.123 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:18.123 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.123 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:18.123 [2024-11-04 14:58:47.897631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:18.123 [2024-11-04 14:58:47.913967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:18.123 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.123 14:58:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:18.123 [2024-11-04 14:58:47.916742] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.057 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:19.058 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.316 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:19.316 "name": "raid_bdev1", 00:26:19.316 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:19.316 "strip_size_kb": 0, 00:26:19.316 "state": "online", 00:26:19.316 "raid_level": "raid1", 00:26:19.316 "superblock": true, 00:26:19.316 "num_base_bdevs": 2, 00:26:19.316 "num_base_bdevs_discovered": 2, 00:26:19.316 "num_base_bdevs_operational": 2, 00:26:19.316 "process": { 00:26:19.316 "type": "rebuild", 00:26:19.316 "target": "spare", 00:26:19.316 "progress": { 00:26:19.316 "blocks": 2560, 00:26:19.316 "percent": 32 00:26:19.316 } 00:26:19.316 }, 00:26:19.316 "base_bdevs_list": [ 00:26:19.316 { 00:26:19.316 "name": "spare", 00:26:19.316 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:19.316 "is_configured": true, 00:26:19.316 "data_offset": 256, 00:26:19.316 "data_size": 7936 00:26:19.316 }, 00:26:19.316 { 00:26:19.316 "name": "BaseBdev2", 00:26:19.316 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:19.316 "is_configured": true, 00:26:19.316 "data_offset": 256, 00:26:19.316 "data_size": 7936 00:26:19.316 } 00:26:19.316 ] 00:26:19.316 }' 00:26:19.316 14:58:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:19.316 14:58:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:26:19.316 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=811 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.316 14:58:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:19.316 "name": "raid_bdev1", 00:26:19.316 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:19.316 "strip_size_kb": 0, 00:26:19.316 "state": "online", 00:26:19.316 "raid_level": "raid1", 00:26:19.316 "superblock": true, 00:26:19.316 "num_base_bdevs": 2, 00:26:19.316 "num_base_bdevs_discovered": 2, 00:26:19.316 "num_base_bdevs_operational": 2, 00:26:19.316 "process": { 00:26:19.316 "type": "rebuild", 00:26:19.316 "target": "spare", 00:26:19.316 "progress": { 00:26:19.316 "blocks": 2816, 00:26:19.316 "percent": 35 00:26:19.316 } 00:26:19.316 }, 00:26:19.316 "base_bdevs_list": [ 00:26:19.316 { 00:26:19.316 "name": "spare", 00:26:19.316 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:19.316 "is_configured": true, 00:26:19.316 "data_offset": 256, 00:26:19.316 "data_size": 7936 00:26:19.316 }, 00:26:19.316 { 00:26:19.316 "name": "BaseBdev2", 00:26:19.316 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:19.316 "is_configured": true, 00:26:19.316 "data_offset": 256, 00:26:19.316 "data_size": 7936 00:26:19.316 } 00:26:19.316 ] 00:26:19.316 }' 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.316 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:19.575 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.575 14:58:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:20.509 "name": "raid_bdev1", 00:26:20.509 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:20.509 "strip_size_kb": 0, 00:26:20.509 "state": "online", 00:26:20.509 "raid_level": "raid1", 00:26:20.509 "superblock": true, 00:26:20.509 "num_base_bdevs": 2, 00:26:20.509 "num_base_bdevs_discovered": 2, 00:26:20.509 
"num_base_bdevs_operational": 2, 00:26:20.509 "process": { 00:26:20.509 "type": "rebuild", 00:26:20.509 "target": "spare", 00:26:20.509 "progress": { 00:26:20.509 "blocks": 5888, 00:26:20.509 "percent": 74 00:26:20.509 } 00:26:20.509 }, 00:26:20.509 "base_bdevs_list": [ 00:26:20.509 { 00:26:20.509 "name": "spare", 00:26:20.509 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:20.509 "is_configured": true, 00:26:20.509 "data_offset": 256, 00:26:20.509 "data_size": 7936 00:26:20.509 }, 00:26:20.509 { 00:26:20.509 "name": "BaseBdev2", 00:26:20.509 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:20.509 "is_configured": true, 00:26:20.509 "data_offset": 256, 00:26:20.509 "data_size": 7936 00:26:20.509 } 00:26:20.509 ] 00:26:20.509 }' 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:20.509 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:20.768 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:20.768 14:58:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:21.335 [2024-11-04 14:58:51.043371] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:21.335 [2024-11-04 14:58:51.043472] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:21.335 [2024-11-04 14:58:51.043619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:21.594 "name": "raid_bdev1", 00:26:21.594 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:21.594 "strip_size_kb": 0, 00:26:21.594 "state": "online", 00:26:21.594 "raid_level": "raid1", 00:26:21.594 "superblock": true, 00:26:21.594 "num_base_bdevs": 2, 00:26:21.594 "num_base_bdevs_discovered": 2, 00:26:21.594 "num_base_bdevs_operational": 2, 00:26:21.594 "base_bdevs_list": [ 00:26:21.594 { 00:26:21.594 "name": "spare", 00:26:21.594 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:21.594 "is_configured": true, 00:26:21.594 "data_offset": 256, 00:26:21.594 "data_size": 7936 00:26:21.594 }, 00:26:21.594 { 00:26:21.594 "name": "BaseBdev2", 00:26:21.594 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:21.594 
"is_configured": true, 00:26:21.594 "data_offset": 256, 00:26:21.594 "data_size": 7936 00:26:21.594 } 00:26:21.594 ] 00:26:21.594 }' 00:26:21.594 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:21.853 "name": "raid_bdev1", 00:26:21.853 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:21.853 "strip_size_kb": 0, 00:26:21.853 "state": "online", 00:26:21.853 "raid_level": "raid1", 00:26:21.853 "superblock": true, 00:26:21.853 "num_base_bdevs": 2, 00:26:21.853 "num_base_bdevs_discovered": 2, 00:26:21.853 "num_base_bdevs_operational": 2, 00:26:21.853 "base_bdevs_list": [ 00:26:21.853 { 00:26:21.853 "name": "spare", 00:26:21.853 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:21.853 "is_configured": true, 00:26:21.853 "data_offset": 256, 00:26:21.853 "data_size": 7936 00:26:21.853 }, 00:26:21.853 { 00:26:21.853 "name": "BaseBdev2", 00:26:21.853 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:21.853 "is_configured": true, 00:26:21.853 "data_offset": 256, 00:26:21.853 "data_size": 7936 00:26:21.853 } 00:26:21.853 ] 00:26:21.853 }' 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:21.853 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.111 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:22.111 "name": "raid_bdev1", 00:26:22.111 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:22.111 "strip_size_kb": 0, 00:26:22.111 "state": "online", 00:26:22.111 "raid_level": "raid1", 00:26:22.111 "superblock": true, 00:26:22.111 "num_base_bdevs": 2, 00:26:22.111 "num_base_bdevs_discovered": 2, 00:26:22.111 "num_base_bdevs_operational": 2, 00:26:22.111 "base_bdevs_list": [ 00:26:22.111 { 00:26:22.111 "name": "spare", 00:26:22.111 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:22.111 
"is_configured": true, 00:26:22.111 "data_offset": 256, 00:26:22.111 "data_size": 7936 00:26:22.111 }, 00:26:22.111 { 00:26:22.111 "name": "BaseBdev2", 00:26:22.112 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:22.112 "is_configured": true, 00:26:22.112 "data_offset": 256, 00:26:22.112 "data_size": 7936 00:26:22.112 } 00:26:22.112 ] 00:26:22.112 }' 00:26:22.112 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:22.112 14:58:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:22.370 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:22.370 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.370 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:22.370 [2024-11-04 14:58:52.219990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.370 [2024-11-04 14:58:52.220048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.370 [2024-11-04 14:58:52.220157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.370 [2024-11-04 14:58:52.220302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.370 [2024-11-04 14:58:52.220322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:22.370 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.370 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:26:22.370 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.370 
14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.370 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:22.370 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:22.629 [2024-11-04 14:58:52.280007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:22.629 [2024-11-04 14:58:52.280098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.629 [2024-11-04 14:58:52.280132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:22.629 [2024-11-04 14:58:52.280148] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.629 [2024-11-04 14:58:52.282965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.629 [2024-11-04 14:58:52.283000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:22.629 [2024-11-04 14:58:52.283089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:22.629 [2024-11-04 14:58:52.283153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:22.629 [2024-11-04 14:58:52.283361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:22.629 spare 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:22.629 [2024-11-04 14:58:52.383471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:22.629 [2024-11-04 14:58:52.383520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:22.629 [2024-11-04 14:58:52.383626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:22.629 [2024-11-04 14:58:52.383730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:22.629 [2024-11-04 14:58:52.383759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:26:22.629 [2024-11-04 14:58:52.383883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.629 14:58:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:22.629 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.629 14:58:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:22.629 "name": "raid_bdev1", 00:26:22.629 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:22.629 "strip_size_kb": 0, 00:26:22.629 "state": "online", 00:26:22.629 "raid_level": "raid1", 00:26:22.629 "superblock": true, 00:26:22.629 "num_base_bdevs": 2, 00:26:22.629 "num_base_bdevs_discovered": 2, 00:26:22.629 "num_base_bdevs_operational": 2, 00:26:22.629 "base_bdevs_list": [ 00:26:22.629 { 00:26:22.629 "name": "spare", 00:26:22.629 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:22.629 "is_configured": true, 00:26:22.629 "data_offset": 256, 00:26:22.629 "data_size": 7936 00:26:22.629 }, 00:26:22.629 { 00:26:22.629 "name": "BaseBdev2", 00:26:22.629 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:22.629 "is_configured": true, 00:26:22.629 "data_offset": 256, 00:26:22.629 "data_size": 7936 00:26:22.629 } 00:26:22.629 ] 00:26:22.629 }' 00:26:22.630 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:22.630 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.195 14:58:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:23.195 "name": "raid_bdev1", 00:26:23.195 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:23.195 "strip_size_kb": 0, 00:26:23.195 "state": "online", 00:26:23.195 "raid_level": "raid1", 00:26:23.195 "superblock": true, 00:26:23.195 "num_base_bdevs": 2, 00:26:23.195 "num_base_bdevs_discovered": 2, 00:26:23.195 "num_base_bdevs_operational": 2, 00:26:23.195 "base_bdevs_list": [ 00:26:23.195 { 00:26:23.195 "name": "spare", 00:26:23.195 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:23.195 "is_configured": true, 00:26:23.195 "data_offset": 256, 00:26:23.195 "data_size": 7936 00:26:23.195 }, 00:26:23.195 { 00:26:23.195 "name": "BaseBdev2", 00:26:23.195 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:23.195 "is_configured": true, 00:26:23.195 "data_offset": 256, 00:26:23.195 "data_size": 7936 00:26:23.195 } 00:26:23.195 ] 00:26:23.195 }' 00:26:23.195 14:58:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:23.195 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:23.195 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:23.195 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:23.195 14:58:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:23.195 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.195 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.195 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:23.195 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:23.454 [2024-11-04 14:58:53.100414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:23.454 14:58:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:23.454 "name": "raid_bdev1", 00:26:23.454 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:23.454 "strip_size_kb": 0, 00:26:23.454 "state": "online", 00:26:23.454 "raid_level": "raid1", 00:26:23.454 "superblock": true, 00:26:23.454 "num_base_bdevs": 2, 00:26:23.454 "num_base_bdevs_discovered": 1, 00:26:23.454 "num_base_bdevs_operational": 1, 00:26:23.454 "base_bdevs_list": [ 00:26:23.454 { 00:26:23.454 "name": null, 00:26:23.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.454 "is_configured": false, 00:26:23.454 "data_offset": 0, 00:26:23.454 "data_size": 7936 00:26:23.454 }, 00:26:23.454 { 00:26:23.454 "name": "BaseBdev2", 00:26:23.454 
"uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:23.454 "is_configured": true, 00:26:23.454 "data_offset": 256, 00:26:23.454 "data_size": 7936 00:26:23.454 } 00:26:23.454 ] 00:26:23.454 }' 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:23.454 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:24.020 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:24.020 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.020 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:24.020 [2024-11-04 14:58:53.672660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:24.020 [2024-11-04 14:58:53.672949] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:24.020 [2024-11-04 14:58:53.672974] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:24.020 [2024-11-04 14:58:53.673040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:24.020 [2024-11-04 14:58:53.688973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:24.020 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.020 14:58:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:26:24.020 [2024-11-04 14:58:53.691717] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:26:24.955 "name": "raid_bdev1", 00:26:24.955 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:24.955 "strip_size_kb": 0, 00:26:24.955 "state": "online", 00:26:24.955 "raid_level": "raid1", 00:26:24.955 "superblock": true, 00:26:24.955 "num_base_bdevs": 2, 00:26:24.955 "num_base_bdevs_discovered": 2, 00:26:24.955 "num_base_bdevs_operational": 2, 00:26:24.955 "process": { 00:26:24.955 "type": "rebuild", 00:26:24.955 "target": "spare", 00:26:24.955 "progress": { 00:26:24.955 "blocks": 2560, 00:26:24.955 "percent": 32 00:26:24.955 } 00:26:24.955 }, 00:26:24.955 "base_bdevs_list": [ 00:26:24.955 { 00:26:24.955 "name": "spare", 00:26:24.955 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:24.955 "is_configured": true, 00:26:24.955 "data_offset": 256, 00:26:24.955 "data_size": 7936 00:26:24.955 }, 00:26:24.955 { 00:26:24.955 "name": "BaseBdev2", 00:26:24.955 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:24.955 "is_configured": true, 00:26:24.955 "data_offset": 256, 00:26:24.955 "data_size": 7936 00:26:24.955 } 00:26:24.955 ] 00:26:24.955 }' 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:24.955 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:25.214 [2024-11-04 14:58:54.866302] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:25.214 [2024-11-04 14:58:54.902154] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:25.214 [2024-11-04 14:58:54.902277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.214 [2024-11-04 14:58:54.902316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:25.214 [2024-11-04 14:58:54.902331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.214 14:58:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.214 "name": "raid_bdev1", 00:26:25.214 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:25.214 "strip_size_kb": 0, 00:26:25.214 "state": "online", 00:26:25.214 "raid_level": "raid1", 00:26:25.214 "superblock": true, 00:26:25.214 "num_base_bdevs": 2, 00:26:25.214 "num_base_bdevs_discovered": 1, 00:26:25.214 "num_base_bdevs_operational": 1, 00:26:25.214 "base_bdevs_list": [ 00:26:25.214 { 00:26:25.214 "name": null, 00:26:25.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.214 "is_configured": false, 00:26:25.214 "data_offset": 0, 00:26:25.214 "data_size": 7936 00:26:25.214 }, 00:26:25.214 { 00:26:25.214 "name": "BaseBdev2", 00:26:25.214 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:25.214 "is_configured": true, 00:26:25.214 "data_offset": 256, 00:26:25.214 "data_size": 7936 00:26:25.214 } 00:26:25.214 ] 00:26:25.214 }' 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.214 14:58:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:25.781 14:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:25.781 14:58:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.781 14:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:25.781 [2024-11-04 14:58:55.447481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:25.781 [2024-11-04 14:58:55.447742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.781 [2024-11-04 14:58:55.447815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:25.781 [2024-11-04 14:58:55.448070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.781 [2024-11-04 14:58:55.448429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.781 [2024-11-04 14:58:55.448597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:25.781 [2024-11-04 14:58:55.448739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:25.782 [2024-11-04 14:58:55.448763] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:25.782 [2024-11-04 14:58:55.448777] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:25.782 [2024-11-04 14:58:55.448819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:25.782 [2024-11-04 14:58:55.464822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:25.782 spare 00:26:25.782 14:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.782 14:58:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:26:25.782 [2024-11-04 14:58:55.467746] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:26:26.717 "name": "raid_bdev1", 00:26:26.717 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:26.717 "strip_size_kb": 0, 00:26:26.717 "state": "online", 00:26:26.717 "raid_level": "raid1", 00:26:26.717 "superblock": true, 00:26:26.717 "num_base_bdevs": 2, 00:26:26.717 "num_base_bdevs_discovered": 2, 00:26:26.717 "num_base_bdevs_operational": 2, 00:26:26.717 "process": { 00:26:26.717 "type": "rebuild", 00:26:26.717 "target": "spare", 00:26:26.717 "progress": { 00:26:26.717 "blocks": 2560, 00:26:26.717 "percent": 32 00:26:26.717 } 00:26:26.717 }, 00:26:26.717 "base_bdevs_list": [ 00:26:26.717 { 00:26:26.717 "name": "spare", 00:26:26.717 "uuid": "2fc4fedd-541f-5731-af8f-b5d3ef9a7864", 00:26:26.717 "is_configured": true, 00:26:26.717 "data_offset": 256, 00:26:26.717 "data_size": 7936 00:26:26.717 }, 00:26:26.717 { 00:26:26.717 "name": "BaseBdev2", 00:26:26.717 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:26.717 "is_configured": true, 00:26:26.717 "data_offset": 256, 00:26:26.717 "data_size": 7936 00:26:26.717 } 00:26:26.717 ] 00:26:26.717 }' 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:26.717 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:26.975 [2024-11-04 
14:58:56.633390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:26.975 [2024-11-04 14:58:56.678501] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:26.975 [2024-11-04 14:58:56.678585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:26.975 [2024-11-04 14:58:56.678611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:26.975 [2024-11-04 14:58:56.678622] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:26.975 14:58:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:26.975 "name": "raid_bdev1", 00:26:26.975 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:26.975 "strip_size_kb": 0, 00:26:26.975 "state": "online", 00:26:26.975 "raid_level": "raid1", 00:26:26.975 "superblock": true, 00:26:26.975 "num_base_bdevs": 2, 00:26:26.975 "num_base_bdevs_discovered": 1, 00:26:26.975 "num_base_bdevs_operational": 1, 00:26:26.975 "base_bdevs_list": [ 00:26:26.975 { 00:26:26.975 "name": null, 00:26:26.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.975 "is_configured": false, 00:26:26.975 "data_offset": 0, 00:26:26.975 "data_size": 7936 00:26:26.975 }, 00:26:26.975 { 00:26:26.975 "name": "BaseBdev2", 00:26:26.975 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:26.975 "is_configured": true, 00:26:26.975 "data_offset": 256, 00:26:26.975 "data_size": 7936 00:26:26.975 } 00:26:26.975 ] 00:26:26.975 }' 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:26.975 14:58:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:27.542 14:58:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:27.542 "name": "raid_bdev1", 00:26:27.542 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:27.542 "strip_size_kb": 0, 00:26:27.542 "state": "online", 00:26:27.542 "raid_level": "raid1", 00:26:27.542 "superblock": true, 00:26:27.542 "num_base_bdevs": 2, 00:26:27.542 "num_base_bdevs_discovered": 1, 00:26:27.542 "num_base_bdevs_operational": 1, 00:26:27.542 "base_bdevs_list": [ 00:26:27.542 { 00:26:27.542 "name": null, 00:26:27.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.542 "is_configured": false, 00:26:27.542 "data_offset": 0, 00:26:27.542 "data_size": 7936 00:26:27.542 }, 00:26:27.542 { 00:26:27.542 "name": "BaseBdev2", 00:26:27.542 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:27.542 "is_configured": true, 00:26:27.542 "data_offset": 256, 
00:26:27.542 "data_size": 7936 00:26:27.542 } 00:26:27.542 ] 00:26:27.542 }' 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.542 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:27.543 [2024-11-04 14:58:57.406369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:27.543 [2024-11-04 14:58:57.406436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.543 [2024-11-04 14:58:57.406474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:27.543 [2024-11-04 14:58:57.406489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.543 [2024-11-04 14:58:57.406748] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.543 [2024-11-04 14:58:57.406768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:27.543 [2024-11-04 14:58:57.406835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:27.543 [2024-11-04 14:58:57.406854] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:27.543 [2024-11-04 14:58:57.406868] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:27.543 [2024-11-04 14:58:57.406882] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:26:27.543 BaseBdev1 00:26:27.543 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.543 14:58:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:28.918 14:58:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:28.918 "name": "raid_bdev1", 00:26:28.918 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:28.918 "strip_size_kb": 0, 00:26:28.918 "state": "online", 00:26:28.918 "raid_level": "raid1", 00:26:28.918 "superblock": true, 00:26:28.918 "num_base_bdevs": 2, 00:26:28.918 "num_base_bdevs_discovered": 1, 00:26:28.918 "num_base_bdevs_operational": 1, 00:26:28.918 "base_bdevs_list": [ 00:26:28.918 { 00:26:28.918 "name": null, 00:26:28.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.918 "is_configured": false, 00:26:28.918 "data_offset": 0, 00:26:28.918 "data_size": 7936 00:26:28.918 }, 00:26:28.918 { 00:26:28.918 "name": "BaseBdev2", 00:26:28.918 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:28.918 "is_configured": true, 00:26:28.918 "data_offset": 256, 00:26:28.918 "data_size": 7936 00:26:28.918 } 00:26:28.918 ] 00:26:28.918 }' 00:26:28.918 14:58:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:28.918 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.193 14:58:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:29.193 "name": "raid_bdev1", 00:26:29.193 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:29.193 "strip_size_kb": 0, 00:26:29.193 "state": "online", 00:26:29.193 "raid_level": "raid1", 00:26:29.193 "superblock": true, 00:26:29.193 "num_base_bdevs": 2, 00:26:29.193 "num_base_bdevs_discovered": 1, 00:26:29.193 "num_base_bdevs_operational": 1, 00:26:29.193 "base_bdevs_list": [ 00:26:29.193 { 00:26:29.193 "name": 
null, 00:26:29.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.193 "is_configured": false, 00:26:29.193 "data_offset": 0, 00:26:29.193 "data_size": 7936 00:26:29.193 }, 00:26:29.193 { 00:26:29.193 "name": "BaseBdev2", 00:26:29.193 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:29.193 "is_configured": true, 00:26:29.193 "data_offset": 256, 00:26:29.193 "data_size": 7936 00:26:29.193 } 00:26:29.193 ] 00:26:29.193 }' 00:26:29.193 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:29.193 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:29.193 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:29.460 [2024-11-04 14:58:59.114971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:29.460 [2024-11-04 14:58:59.115176] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:29.460 [2024-11-04 14:58:59.115201] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:29.460 request: 00:26:29.460 { 00:26:29.460 "base_bdev": "BaseBdev1", 00:26:29.460 "raid_bdev": "raid_bdev1", 00:26:29.460 "method": "bdev_raid_add_base_bdev", 00:26:29.460 "req_id": 1 00:26:29.460 } 00:26:29.460 Got JSON-RPC error response 00:26:29.460 response: 00:26:29.460 { 00:26:29.460 "code": -22, 00:26:29.460 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:29.460 } 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:29.460 14:58:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:30.396 "name": "raid_bdev1", 00:26:30.396 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:30.396 "strip_size_kb": 0, 
00:26:30.396 "state": "online", 00:26:30.396 "raid_level": "raid1", 00:26:30.396 "superblock": true, 00:26:30.396 "num_base_bdevs": 2, 00:26:30.396 "num_base_bdevs_discovered": 1, 00:26:30.396 "num_base_bdevs_operational": 1, 00:26:30.396 "base_bdevs_list": [ 00:26:30.396 { 00:26:30.396 "name": null, 00:26:30.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.396 "is_configured": false, 00:26:30.396 "data_offset": 0, 00:26:30.396 "data_size": 7936 00:26:30.396 }, 00:26:30.396 { 00:26:30.396 "name": "BaseBdev2", 00:26:30.396 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:30.396 "is_configured": true, 00:26:30.396 "data_offset": 256, 00:26:30.396 "data_size": 7936 00:26:30.396 } 00:26:30.396 ] 00:26:30.396 }' 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:30.396 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.963 
14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:30.963 "name": "raid_bdev1", 00:26:30.963 "uuid": "1ad66044-c0f8-4069-b9a8-aa2d6be4c5a3", 00:26:30.963 "strip_size_kb": 0, 00:26:30.963 "state": "online", 00:26:30.963 "raid_level": "raid1", 00:26:30.963 "superblock": true, 00:26:30.963 "num_base_bdevs": 2, 00:26:30.963 "num_base_bdevs_discovered": 1, 00:26:30.963 "num_base_bdevs_operational": 1, 00:26:30.963 "base_bdevs_list": [ 00:26:30.963 { 00:26:30.963 "name": null, 00:26:30.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.963 "is_configured": false, 00:26:30.963 "data_offset": 0, 00:26:30.963 "data_size": 7936 00:26:30.963 }, 00:26:30.963 { 00:26:30.963 "name": "BaseBdev2", 00:26:30.963 "uuid": "9f787d5a-cc37-52c0-b46f-c16ac9fdb52c", 00:26:30.963 "is_configured": true, 00:26:30.963 "data_offset": 256, 00:26:30.963 "data_size": 7936 00:26:30.963 } 00:26:30.963 ] 00:26:30.963 }' 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89654 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89654 ']' 00:26:30.963 14:59:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89654 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:30.963 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89654 00:26:31.223 killing process with pid 89654 00:26:31.223 Received shutdown signal, test time was about 60.000000 seconds 00:26:31.223 00:26:31.223 Latency(us) 00:26:31.223 [2024-11-04T14:59:01.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.223 [2024-11-04T14:59:01.115Z] =================================================================================================================== 00:26:31.223 [2024-11-04T14:59:01.115Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:31.223 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:31.223 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:31.223 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89654' 00:26:31.223 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89654 00:26:31.223 [2024-11-04 14:59:00.871851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:31.223 14:59:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89654 00:26:31.223 [2024-11-04 14:59:00.872019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:31.223 [2024-11-04 14:59:00.872083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:26:31.223 [2024-11-04 14:59:00.872116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:26:31.482 [2024-11-04 14:59:01.132559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:32.418 14:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:26:32.418 00:26:32.418 real 0m18.553s 00:26:32.418 user 0m25.396s 00:26:32.418 sys 0m1.449s 00:26:32.418 ************************************ 00:26:32.418 END TEST raid_rebuild_test_sb_md_interleaved 00:26:32.418 ************************************ 00:26:32.418 14:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:32.418 14:59:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:32.418 14:59:02 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:26:32.418 14:59:02 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:26:32.418 14:59:02 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89654 ']' 00:26:32.418 14:59:02 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89654 00:26:32.418 14:59:02 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:26:32.418 00:26:32.418 real 13m13.492s 00:26:32.418 user 18m31.002s 00:26:32.418 sys 1m53.287s 00:26:32.418 14:59:02 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:32.418 ************************************ 00:26:32.418 END TEST bdev_raid 00:26:32.418 14:59:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:32.418 ************************************ 00:26:32.418 14:59:02 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:32.418 14:59:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:32.418 14:59:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:32.418 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:26:32.418 
************************************ 00:26:32.418 START TEST spdkcli_raid 00:26:32.418 ************************************ 00:26:32.418 14:59:02 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:32.418 * Looking for test storage... 00:26:32.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:32.418 14:59:02 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.677 14:59:02 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:32.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.677 --rc genhtml_branch_coverage=1 00:26:32.677 --rc genhtml_function_coverage=1 00:26:32.677 --rc genhtml_legend=1 00:26:32.677 --rc geninfo_all_blocks=1 00:26:32.677 --rc geninfo_unexecuted_blocks=1 00:26:32.677 00:26:32.677 ' 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:32.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.677 --rc genhtml_branch_coverage=1 00:26:32.677 --rc genhtml_function_coverage=1 00:26:32.677 --rc genhtml_legend=1 00:26:32.677 --rc geninfo_all_blocks=1 00:26:32.677 --rc geninfo_unexecuted_blocks=1 00:26:32.677 00:26:32.677 ' 00:26:32.677 
14:59:02 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:32.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.677 --rc genhtml_branch_coverage=1 00:26:32.677 --rc genhtml_function_coverage=1 00:26:32.677 --rc genhtml_legend=1 00:26:32.677 --rc geninfo_all_blocks=1 00:26:32.677 --rc geninfo_unexecuted_blocks=1 00:26:32.677 00:26:32.677 ' 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:32.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.677 --rc genhtml_branch_coverage=1 00:26:32.677 --rc genhtml_function_coverage=1 00:26:32.677 --rc genhtml_legend=1 00:26:32.677 --rc geninfo_all_blocks=1 00:26:32.677 --rc geninfo_unexecuted_blocks=1 00:26:32.677 00:26:32.677 ' 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:26:32.677 14:59:02 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:32.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90336 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90336 00:26:32.677 14:59:02 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90336 ']' 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:32.677 14:59:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:32.936 [2024-11-04 14:59:02.581387] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:26:32.936 [2024-11-04 14:59:02.581768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90336 ] 00:26:32.936 [2024-11-04 14:59:02.770551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:33.195 [2024-11-04 14:59:02.900174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.195 [2024-11-04 14:59:02.900194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.130 14:59:03 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:34.130 14:59:03 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:26:34.130 14:59:03 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:26:34.130 14:59:03 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:34.130 14:59:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:34.130 14:59:03 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:26:34.130 14:59:03 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:34.130 14:59:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:34.130 14:59:03 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:34.130 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:34.130 ' 00:26:36.032 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:26:36.032 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:26:36.032 14:59:05 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:26:36.032 14:59:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.032 14:59:05 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.032 14:59:05 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:26:36.032 14:59:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.032 14:59:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:36.032 14:59:05 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:26:36.032 ' 00:26:36.969 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:26:36.969 14:59:06 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:26:36.969 14:59:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.969 14:59:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:36.969 14:59:06 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:26:36.969 14:59:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.969 14:59:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:36.969 14:59:06 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:26:36.969 14:59:06 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:26:37.535 14:59:07 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:26:37.793 14:59:07 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:26:37.793 14:59:07 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:26:37.793 14:59:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:37.793 14:59:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:37.793 14:59:07 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:26:37.793 14:59:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:37.793 14:59:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:37.793 14:59:07 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:26:37.793 ' 00:26:38.728 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:26:38.728 14:59:08 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:26:38.728 14:59:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:38.728 14:59:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:38.986 14:59:08 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:26:38.986 14:59:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:38.986 14:59:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:38.986 14:59:08 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:26:38.986 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:26:38.986 ' 00:26:40.361 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:26:40.361 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:26:40.361 14:59:10 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:40.361 14:59:10 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90336 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90336 ']' 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90336 00:26:40.361 14:59:10 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90336 00:26:40.361 killing process with pid 90336 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90336' 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90336 00:26:40.361 14:59:10 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90336 00:26:42.918 14:59:12 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:26:42.918 14:59:12 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90336 ']' 00:26:42.918 14:59:12 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90336 00:26:42.918 14:59:12 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90336 ']' 00:26:42.918 Process with pid 90336 is not found 00:26:42.918 14:59:12 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90336 00:26:42.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90336) - No such process 00:26:42.918 14:59:12 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90336 is not found' 00:26:42.918 14:59:12 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:26:42.918 14:59:12 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:42.918 14:59:12 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:42.918 14:59:12 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:42.918 ************************************ 00:26:42.918 END TEST spdkcli_raid 
00:26:42.918 ************************************ 00:26:42.918 00:26:42.918 real 0m10.182s 00:26:42.918 user 0m20.955s 00:26:42.918 sys 0m1.283s 00:26:42.918 14:59:12 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:42.918 14:59:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:42.918 14:59:12 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:42.918 14:59:12 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:42.918 14:59:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:42.918 14:59:12 -- common/autotest_common.sh@10 -- # set +x 00:26:42.918 ************************************ 00:26:42.918 START TEST blockdev_raid5f 00:26:42.918 ************************************ 00:26:42.918 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:42.918 * Looking for test storage... 00:26:42.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:42.918 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:42.918 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:26:42.918 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:42.918 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.918 14:59:12 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.919 14:59:12 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:42.919 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.919 --rc genhtml_branch_coverage=1 00:26:42.919 --rc genhtml_function_coverage=1 00:26:42.919 --rc genhtml_legend=1 00:26:42.919 --rc geninfo_all_blocks=1 00:26:42.919 --rc geninfo_unexecuted_blocks=1 00:26:42.919 00:26:42.919 ' 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.919 --rc genhtml_branch_coverage=1 00:26:42.919 --rc genhtml_function_coverage=1 00:26:42.919 --rc genhtml_legend=1 00:26:42.919 --rc geninfo_all_blocks=1 00:26:42.919 --rc geninfo_unexecuted_blocks=1 00:26:42.919 00:26:42.919 ' 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.919 --rc genhtml_branch_coverage=1 00:26:42.919 --rc genhtml_function_coverage=1 00:26:42.919 --rc genhtml_legend=1 00:26:42.919 --rc geninfo_all_blocks=1 00:26:42.919 --rc geninfo_unexecuted_blocks=1 00:26:42.919 00:26:42.919 ' 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.919 --rc genhtml_branch_coverage=1 00:26:42.919 --rc genhtml_function_coverage=1 00:26:42.919 --rc genhtml_legend=1 00:26:42.919 --rc geninfo_all_blocks=1 00:26:42.919 --rc geninfo_unexecuted_blocks=1 00:26:42.919 00:26:42.919 ' 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90611 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90611 00:26:42.919 14:59:12 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90611 ']' 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:42.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:42.919 14:59:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:42.919 [2024-11-04 14:59:12.807762] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:26:42.919 [2024-11-04 14:59:12.807986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90611 ] 00:26:43.177 [2024-11-04 14:59:12.996849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.435 [2024-11-04 14:59:13.127035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.369 14:59:13 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:44.369 14:59:13 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:26:44.369 14:59:13 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:26:44.369 14:59:13 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:26:44.369 14:59:13 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:26:44.369 14:59:13 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.369 14:59:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:44.369 Malloc0 00:26:44.369 Malloc1 00:26:44.369 Malloc2 00:26:44.369 14:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.369 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:26:44.369 14:59:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.369 14:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:44.369 14:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:44.370 14:59:14 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:26:44.370 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7d73b328-da95-4122-ad76-6122290cb0eb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7d73b328-da95-4122-ad76-6122290cb0eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d73b328-da95-4122-ad76-6122290cb0eb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "735321b1-a870-4fea-a449-3df16b14ed7a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"28e0a93e-1883-42ff-8b80-62c117b63bcd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "03ccd49b-669e-47d9-b8ae-28f3fe1e31f7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:44.629 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:26:44.629 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:26:44.629 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:26:44.629 14:59:14 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90611 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90611 ']' 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90611 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90611 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:44.629 killing process with pid 90611 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90611' 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90611 00:26:44.629 14:59:14 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90611 00:26:47.160 14:59:16 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:47.160 14:59:16 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:47.160 14:59:16 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:26:47.160 14:59:16 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:47.160 14:59:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:47.160 ************************************ 00:26:47.160 START TEST bdev_hello_world 00:26:47.160 ************************************ 00:26:47.160 14:59:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:47.160 [2024-11-04 14:59:16.737905] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:26:47.160 [2024-11-04 14:59:16.738094] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90673 ] 00:26:47.160 [2024-11-04 14:59:16.923597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.419 [2024-11-04 14:59:17.054138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.678 [2024-11-04 14:59:17.564652] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:47.678 [2024-11-04 14:59:17.564730] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:26:47.678 [2024-11-04 14:59:17.564769] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:47.678 [2024-11-04 14:59:17.565459] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:47.678 [2024-11-04 14:59:17.565743] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:47.678 [2024-11-04 14:59:17.565770] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:47.678 [2024-11-04 14:59:17.565839] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:26:47.678 00:26:47.678 [2024-11-04 14:59:17.565874] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:49.052 00:26:49.052 real 0m2.103s 00:26:49.052 user 0m1.632s 00:26:49.052 sys 0m0.350s 00:26:49.052 14:59:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:49.052 ************************************ 00:26:49.052 END TEST bdev_hello_world 00:26:49.052 ************************************ 00:26:49.052 14:59:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:49.052 14:59:18 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:26:49.052 14:59:18 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:49.052 14:59:18 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:49.052 14:59:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:49.052 ************************************ 00:26:49.052 START TEST bdev_bounds 00:26:49.052 ************************************ 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90715 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90715' 00:26:49.052 Process bdevio pid: 90715 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90715 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90715 ']' 00:26:49.052 14:59:18 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:49.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:49.052 14:59:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:49.052 [2024-11-04 14:59:18.895509] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:26:49.052 [2024-11-04 14:59:18.895732] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90715 ] 00:26:49.310 [2024-11-04 14:59:19.081437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:49.569 [2024-11-04 14:59:19.216583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.569 [2024-11-04 14:59:19.216700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.569 [2024-11-04 14:59:19.216716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.135 14:59:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:50.135 14:59:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:26:50.135 14:59:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:50.135 I/O targets: 00:26:50.135 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:26:50.135 00:26:50.135 
00:26:50.135 CUnit - A unit testing framework for C - Version 2.1-3 00:26:50.135 http://cunit.sourceforge.net/ 00:26:50.135 00:26:50.135 00:26:50.135 Suite: bdevio tests on: raid5f 00:26:50.135 Test: blockdev write read block ...passed 00:26:50.135 Test: blockdev write zeroes read block ...passed 00:26:50.135 Test: blockdev write zeroes read no split ...passed 00:26:50.394 Test: blockdev write zeroes read split ...passed 00:26:50.394 Test: blockdev write zeroes read split partial ...passed 00:26:50.394 Test: blockdev reset ...passed 00:26:50.394 Test: blockdev write read 8 blocks ...passed 00:26:50.394 Test: blockdev write read size > 128k ...passed 00:26:50.394 Test: blockdev write read invalid size ...passed 00:26:50.394 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:50.394 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:50.394 Test: blockdev write read max offset ...passed 00:26:50.394 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:50.394 Test: blockdev writev readv 8 blocks ...passed 00:26:50.394 Test: blockdev writev readv 30 x 1block ...passed 00:26:50.394 Test: blockdev writev readv block ...passed 00:26:50.394 Test: blockdev writev readv size > 128k ...passed 00:26:50.394 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:50.394 Test: blockdev comparev and writev ...passed 00:26:50.394 Test: blockdev nvme passthru rw ...passed 00:26:50.394 Test: blockdev nvme passthru vendor specific ...passed 00:26:50.394 Test: blockdev nvme admin passthru ...passed 00:26:50.394 Test: blockdev copy ...passed 00:26:50.394 00:26:50.394 Run Summary: Type Total Ran Passed Failed Inactive 00:26:50.394 suites 1 1 n/a 0 0 00:26:50.394 tests 23 23 23 0 0 00:26:50.394 asserts 130 130 130 0 n/a 00:26:50.394 00:26:50.394 Elapsed time = 0.522 seconds 00:26:50.394 0 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90715 00:26:50.394 
14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90715 ']' 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90715 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90715 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:50.394 killing process with pid 90715 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90715' 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90715 00:26:50.394 14:59:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90715 00:26:51.768 14:59:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:26:51.768 00:26:51.768 real 0m2.711s 00:26:51.768 user 0m6.641s 00:26:51.768 sys 0m0.503s 00:26:51.768 ************************************ 00:26:51.768 END TEST bdev_bounds 00:26:51.768 ************************************ 00:26:51.768 14:59:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:51.768 14:59:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:51.768 14:59:21 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:51.768 14:59:21 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:51.768 14:59:21 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:51.768 
14:59:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:51.768 ************************************ 00:26:51.768 START TEST bdev_nbd 00:26:51.768 ************************************ 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90775 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90775 /var/tmp/spdk-nbd.sock 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90775 ']' 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:51.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:51.768 14:59:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:52.026 [2024-11-04 14:59:21.668173] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:26:52.026 [2024-11-04 14:59:21.668401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.026 [2024-11-04 14:59:21.847803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.284 [2024-11-04 14:59:21.982320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:52.852 14:59:22 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:53.110 1+0 records in 00:26:53.110 1+0 records out 00:26:53.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377028 s, 10.9 MB/s 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:53.110 14:59:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:53.368 { 00:26:53.368 "nbd_device": "/dev/nbd0", 00:26:53.368 "bdev_name": "raid5f" 00:26:53.368 } 00:26:53.368 ]' 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:53.368 { 00:26:53.368 "nbd_device": "/dev/nbd0", 00:26:53.368 "bdev_name": "raid5f" 00:26:53.368 } 00:26:53.368 ]' 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:53.368 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:53.626 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:53.883 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:53.883 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:53.883 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:53.883 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:53.883 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:53.883 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:54.142 14:59:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:26:54.400 /dev/nbd0 00:26:54.400 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:54.400 14:59:24 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:54.401 1+0 records in 00:26:54.401 1+0 records out 00:26:54.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313996 s, 13.0 MB/s 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.401 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:54.659 { 00:26:54.659 "nbd_device": "/dev/nbd0", 00:26:54.659 "bdev_name": "raid5f" 00:26:54.659 } 00:26:54.659 ]' 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:54.659 { 00:26:54.659 "nbd_device": "/dev/nbd0", 00:26:54.659 "bdev_name": "raid5f" 00:26:54.659 } 00:26:54.659 ]' 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:54.659 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:54.659 256+0 records in 00:26:54.659 256+0 records out 00:26:54.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104154 s, 101 MB/s 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:54.660 256+0 records in 00:26:54.660 256+0 records out 00:26:54.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0362519 s, 28.9 MB/s 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:54.660 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.918 14:59:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:26:55.177 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:55.177 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:55.177 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:26:55.435 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:55.693 malloc_lvol_verify 00:26:55.693 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:55.952 062fa7ab-c57c-4bb4-839a-20aaa6907f72 00:26:55.952 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:56.210 278621a1-82a0-4dd5-88f2-a9782080ffaf 00:26:56.210 14:59:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:56.468 /dev/nbd0 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:26:56.468 mke2fs 1.47.0 (5-Feb-2023) 00:26:56.468 Discarding device blocks: 0/4096 done 00:26:56.468 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:56.468 00:26:56.468 Allocating group tables: 0/1 done 00:26:56.468 Writing inode tables: 0/1 done 00:26:56.468 Creating journal (1024 blocks): done 00:26:56.468 Writing superblocks and filesystem accounting information: 0/1 done 00:26:56.468 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.468 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90775 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90775 ']' 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90775 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90775 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:56.727 killing process with pid 90775 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90775' 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90775 00:26:56.727 14:59:26 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90775 00:26:58.104 14:59:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:26:58.104 ************************************ 00:26:58.104 END TEST bdev_nbd 00:26:58.104 ************************************ 00:26:58.104 00:26:58.104 real 0m6.157s 00:26:58.104 user 0m8.782s 00:26:58.104 sys 0m1.416s 00:26:58.104 14:59:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:58.104 14:59:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 14:59:27 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:26:58.104 14:59:27 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:26:58.104 14:59:27 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:26:58.104 14:59:27 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:26:58.104 14:59:27 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:58.104 14:59:27 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:58.104 14:59:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 ************************************ 00:26:58.104 START TEST bdev_fio 00:26:58.104 ************************************ 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:26:58.104 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 ************************************ 00:26:58.104 START TEST bdev_fio_rw_verify 00:26:58.104 ************************************ 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:26:58.104 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:58.105 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:58.105 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:26:58.105 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:58.105 14:59:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:58.375 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:58.375 fio-3.35 00:26:58.375 Starting 1 thread 00:27:10.595 00:27:10.595 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90979: Mon Nov 4 14:59:39 2024 00:27:10.595 read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(394MiB/10000msec) 00:27:10.595 slat (nsec): min=19611, max=86350, avg=23543.85, stdev=5078.59 00:27:10.595 clat (usec): min=12, max=406, avg=157.12, stdev=57.82 00:27:10.595 lat (usec): min=34, max=442, avg=180.66, stdev=58.80 00:27:10.595 clat percentiles (usec): 00:27:10.595 | 50.000th=[ 157], 99.000th=[ 285], 99.900th=[ 363], 99.990th=[ 388], 00:27:10.595 | 99.999th=[ 404] 00:27:10.595 write: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(408MiB/9873msec); 0 zone resets 00:27:10.595 slat (usec): min=9, max=285, avg=20.02, stdev= 5.74 00:27:10.595 clat (usec): min=62, max=1453, avg=364.73, stdev=55.91 00:27:10.595 lat (usec): min=79, max=1533, avg=384.75, stdev=57.76 00:27:10.595 clat percentiles (usec): 00:27:10.595 | 50.000th=[ 367], 99.000th=[ 529], 99.900th=[ 660], 99.990th=[ 1139], 00:27:10.595 | 99.999th=[ 1401] 00:27:10.595 bw ( KiB/s): min=38296, max=44496, per=98.54%, avg=41743.21, stdev=1732.94, samples=19 00:27:10.595 iops : min= 9574, max=11124, avg=10435.79, stdev=433.25, samples=19 00:27:10.595 lat (usec) : 20=0.01%, 50=0.01%, 
100=10.24%, 250=37.22%, 500=51.71% 00:27:10.595 lat (usec) : 750=0.80%, 1000=0.02% 00:27:10.595 lat (msec) : 2=0.02% 00:27:10.595 cpu : usr=98.62%, sys=0.60%, ctx=22, majf=0, minf=8516 00:27:10.595 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:10.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.595 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.595 issued rwts: total=100906,104561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.595 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:10.595 00:27:10.595 Run status group 0 (all jobs): 00:27:10.595 READ: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=394MiB (413MB), run=10000-10000msec 00:27:10.595 WRITE: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=408MiB (428MB), run=9873-9873msec 00:27:10.595 ----------------------------------------------------- 00:27:10.595 Suppressions used: 00:27:10.595 count bytes template 00:27:10.595 1 7 /usr/src/fio/parse.c 00:27:10.595 497 47712 /usr/src/fio/iolog.c 00:27:10.595 1 8 libtcmalloc_minimal.so 00:27:10.595 1 904 libcrypto.so 00:27:10.595 ----------------------------------------------------- 00:27:10.595 00:27:10.854 00:27:10.854 real 0m12.623s 00:27:10.854 user 0m12.880s 00:27:10.854 sys 0m0.842s 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:27:10.854 ************************************ 00:27:10.854 END TEST bdev_fio_rw_verify 00:27:10.854 ************************************ 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7d73b328-da95-4122-ad76-6122290cb0eb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"7d73b328-da95-4122-ad76-6122290cb0eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d73b328-da95-4122-ad76-6122290cb0eb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "735321b1-a870-4fea-a449-3df16b14ed7a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "28e0a93e-1883-42ff-8b80-62c117b63bcd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "03ccd49b-669e-47d9-b8ae-28f3fe1e31f7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:27:10.854 /home/vagrant/spdk_repo/spdk 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:27:10.854 00:27:10.854 real 0m12.852s 00:27:10.854 user 0m12.995s 00:27:10.854 sys 0m0.938s 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:10.854 14:59:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:10.854 ************************************ 00:27:10.854 END TEST bdev_fio 00:27:10.854 ************************************ 00:27:10.854 14:59:40 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:10.854 14:59:40 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:10.854 14:59:40 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:27:10.854 14:59:40 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:10.854 14:59:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:10.854 ************************************ 00:27:10.854 START TEST bdev_verify 00:27:10.854 ************************************ 00:27:10.854 14:59:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:11.112 [2024-11-04 14:59:40.749426] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 
00:27:11.112 [2024-11-04 14:59:40.749646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91143 ] 00:27:11.112 [2024-11-04 14:59:40.918422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:11.371 [2024-11-04 14:59:41.047259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.371 [2024-11-04 14:59:41.047292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.938 Running I/O for 5 seconds... 00:27:13.808 11740.00 IOPS, 45.86 MiB/s [2024-11-04T14:59:44.636Z] 11976.00 IOPS, 46.78 MiB/s [2024-11-04T14:59:46.012Z] 12802.67 IOPS, 50.01 MiB/s [2024-11-04T14:59:46.947Z] 12511.75 IOPS, 48.87 MiB/s [2024-11-04T14:59:46.947Z] 12338.80 IOPS, 48.20 MiB/s 00:27:17.055 Latency(us) 00:27:17.055 [2024-11-04T14:59:46.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.056 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:17.056 Verification LBA range: start 0x0 length 0x2000 00:27:17.056 raid5f : 5.02 6213.61 24.27 0.00 0.00 31106.32 242.04 29074.15 00:27:17.056 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:17.056 Verification LBA range: start 0x2000 length 0x2000 00:27:17.056 raid5f : 5.02 6128.07 23.94 0.00 0.00 31323.44 247.62 30504.03 00:27:17.056 [2024-11-04T14:59:46.948Z] =================================================================================================================== 00:27:17.056 [2024-11-04T14:59:46.948Z] Total : 12341.68 48.21 0.00 0.00 31214.09 242.04 30504.03 00:27:18.431 00:27:18.431 real 0m7.257s 00:27:18.431 user 0m13.311s 00:27:18.431 sys 0m0.364s 00:27:18.431 14:59:47 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:18.431 14:59:47 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:27:18.431 ************************************ 00:27:18.431 END TEST bdev_verify 00:27:18.431 ************************************ 00:27:18.431 14:59:47 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:18.431 14:59:47 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:27:18.431 14:59:47 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:18.431 14:59:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:18.431 ************************************ 00:27:18.431 START TEST bdev_verify_big_io 00:27:18.431 ************************************ 00:27:18.431 14:59:47 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:18.431 [2024-11-04 14:59:48.084282] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:27:18.431 [2024-11-04 14:59:48.084470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91237 ] 00:27:18.431 [2024-11-04 14:59:48.268252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:18.690 [2024-11-04 14:59:48.393815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.690 [2024-11-04 14:59:48.393826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.255 Running I/O for 5 seconds... 
00:27:21.597 693.00 IOPS, 43.31 MiB/s [2024-11-04T14:59:52.424Z] 665.00 IOPS, 41.56 MiB/s [2024-11-04T14:59:53.361Z] 676.67 IOPS, 42.29 MiB/s [2024-11-04T14:59:54.305Z] 713.00 IOPS, 44.56 MiB/s [2024-11-04T14:59:54.564Z] 685.40 IOPS, 42.84 MiB/s 00:27:24.672 Latency(us) 00:27:24.672 [2024-11-04T14:59:54.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.672 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:24.672 Verification LBA range: start 0x0 length 0x200 00:27:24.672 raid5f : 5.39 329.86 20.62 0.00 0.00 9706245.35 232.73 499503.48 00:27:24.672 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:24.672 Verification LBA range: start 0x200 length 0x200 00:27:24.672 raid5f : 5.35 355.83 22.24 0.00 0.00 8988633.74 322.09 455653.93 00:27:24.672 [2024-11-04T14:59:54.564Z] =================================================================================================================== 00:27:24.672 [2024-11-04T14:59:54.564Z] Total : 685.69 42.86 0.00 0.00 9335066.93 232.73 499503.48 00:27:26.048 00:27:26.048 real 0m7.942s 00:27:26.048 user 0m14.532s 00:27:26.048 sys 0m0.382s 00:27:26.048 14:59:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:26.048 14:59:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:26.048 ************************************ 00:27:26.048 END TEST bdev_verify_big_io 00:27:26.048 ************************************ 00:27:26.308 14:59:55 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:26.308 14:59:55 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:27:26.308 14:59:55 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:26.308 14:59:55 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:26.308 ************************************ 00:27:26.308 START TEST bdev_write_zeroes 00:27:26.308 ************************************ 00:27:26.308 14:59:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:26.308 [2024-11-04 14:59:56.062630] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:27:26.308 [2024-11-04 14:59:56.062825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91337 ] 00:27:26.567 [2024-11-04 14:59:56.240011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.567 [2024-11-04 14:59:56.394332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.131 Running I/O for 1 seconds... 
00:27:28.506 22239.00 IOPS, 86.87 MiB/s 00:27:28.506 Latency(us) 00:27:28.506 [2024-11-04T14:59:58.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.506 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:28.506 raid5f : 1.01 22220.08 86.80 0.00 0.00 5740.36 2040.55 7745.16 00:27:28.506 [2024-11-04T14:59:58.398Z] =================================================================================================================== 00:27:28.506 [2024-11-04T14:59:58.398Z] Total : 22220.08 86.80 0.00 0.00 5740.36 2040.55 7745.16 00:27:29.442 00:27:29.442 real 0m3.315s 00:27:29.442 user 0m2.820s 00:27:29.442 sys 0m0.367s 00:27:29.442 14:59:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:29.442 14:59:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:29.442 ************************************ 00:27:29.442 END TEST bdev_write_zeroes 00:27:29.442 ************************************ 00:27:29.442 14:59:59 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:29.442 14:59:59 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:27:29.442 14:59:59 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:29.442 14:59:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:29.704 ************************************ 00:27:29.704 START TEST bdev_json_nonenclosed 00:27:29.704 ************************************ 00:27:29.704 14:59:59 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:29.704 [2024-11-04 
14:59:59.444100] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:27:29.705 [2024-11-04 14:59:59.444313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91390 ] 00:27:29.974 [2024-11-04 14:59:59.623071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.974 [2024-11-04 14:59:59.768582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.974 [2024-11-04 14:59:59.768780] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:27:29.974 [2024-11-04 14:59:59.768824] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:29.974 [2024-11-04 14:59:59.768841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:30.239 00:27:30.239 real 0m0.722s 00:27:30.239 user 0m0.456s 00:27:30.239 sys 0m0.162s 00:27:30.239 15:00:00 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:30.239 15:00:00 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:30.239 ************************************ 00:27:30.239 END TEST bdev_json_nonenclosed 00:27:30.239 ************************************ 00:27:30.239 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:30.239 15:00:00 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:27:30.239 15:00:00 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:30.239 15:00:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:30.239 
************************************ 00:27:30.239 START TEST bdev_json_nonarray 00:27:30.239 ************************************ 00:27:30.239 15:00:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:30.498 [2024-11-04 15:00:00.226581] Starting SPDK v25.01-pre git sha1 361e7dfef / DPDK 24.03.0 initialization... 00:27:30.498 [2024-11-04 15:00:00.226779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91420 ] 00:27:30.757 [2024-11-04 15:00:00.408842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.757 [2024-11-04 15:00:00.542771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.757 [2024-11-04 15:00:00.542950] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:27:30.757 [2024-11-04 15:00:00.542980] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:30.757 [2024-11-04 15:00:00.543024] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:31.015 00:27:31.015 real 0m0.685s 00:27:31.015 user 0m0.439s 00:27:31.015 sys 0m0.141s 00:27:31.015 15:00:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:31.015 15:00:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:31.015 ************************************ 00:27:31.015 END TEST bdev_json_nonarray 00:27:31.015 ************************************ 00:27:31.015 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:27:31.015 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:27:31.015 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:27:31.015 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:27:31.016 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:27:31.016 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:31.016 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:31.016 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:27:31.016 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:27:31.016 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:27:31.016 15:00:00 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:27:31.016 00:27:31.016 real 0m48.394s 00:27:31.016 user 1m5.724s 00:27:31.016 sys 0m5.697s 00:27:31.016 15:00:00 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:31.016 15:00:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:31.016 
************************************ 00:27:31.016 END TEST blockdev_raid5f 00:27:31.016 ************************************ 00:27:31.016 15:00:00 -- spdk/autotest.sh@194 -- # uname -s 00:27:31.016 15:00:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:27:31.016 15:00:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:27:31.016 15:00:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:27:31.016 15:00:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:27:31.016 15:00:00 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:27:31.016 15:00:00 -- spdk/autotest.sh@256 -- # timing_exit lib 00:27:31.016 15:00:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:31.016 15:00:00 -- common/autotest_common.sh@10 -- # set +x 00:27:31.274 15:00:00 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:31.274 15:00:00 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:27:31.274 15:00:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:31.274 15:00:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:31.274 15:00:00 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:27:31.274 15:00:00 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:27:31.274 15:00:00 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:27:31.274 15:00:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:31.274 15:00:00 -- common/autotest_common.sh@10 -- # set +x 00:27:31.274 15:00:00 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:27:31.274 15:00:00 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:27:31.274 15:00:00 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:27:31.274 15:00:00 -- common/autotest_common.sh@10 -- # set +x 00:27:33.176 INFO: APP EXITING 00:27:33.176 INFO: killing all VMs 00:27:33.176 INFO: killing vhost app 00:27:33.176 INFO: EXIT DONE 00:27:33.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:33.176 Waiting for block devices as requested 00:27:33.176 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:33.434 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:34.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:34.000 Cleaning 00:27:34.000 Removing: /var/run/dpdk/spdk0/config 00:27:34.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:34.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:34.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:34.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:34.000 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:34.000 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:34.000 Removing: /dev/shm/spdk_tgt_trace.pid56892 00:27:34.000 Removing: /var/run/dpdk/spdk0 00:27:34.000 Removing: /var/run/dpdk/spdk_pid56651 00:27:34.000 Removing: /var/run/dpdk/spdk_pid56892 00:27:34.258 Removing: /var/run/dpdk/spdk_pid57121 00:27:34.258 Removing: /var/run/dpdk/spdk_pid57225 00:27:34.258 Removing: /var/run/dpdk/spdk_pid57281 00:27:34.258 Removing: /var/run/dpdk/spdk_pid57410 00:27:34.258 Removing: /var/run/dpdk/spdk_pid57438 
00:27:34.258 Removing: /var/run/dpdk/spdk_pid57637 00:27:34.258 Removing: /var/run/dpdk/spdk_pid57754 00:27:34.258 Removing: /var/run/dpdk/spdk_pid57861 00:27:34.258 Removing: /var/run/dpdk/spdk_pid57983 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58091 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58136 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58178 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58249 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58338 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58817 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58890 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58964 00:27:34.258 Removing: /var/run/dpdk/spdk_pid58991 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59142 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59164 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59323 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59339 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59408 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59432 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59496 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59514 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59715 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59751 00:27:34.258 Removing: /var/run/dpdk/spdk_pid59840 00:27:34.258 Removing: /var/run/dpdk/spdk_pid61235 00:27:34.258 Removing: /var/run/dpdk/spdk_pid61452 00:27:34.258 Removing: /var/run/dpdk/spdk_pid61598 00:27:34.258 Removing: /var/run/dpdk/spdk_pid62263 00:27:34.258 Removing: /var/run/dpdk/spdk_pid62475 00:27:34.258 Removing: /var/run/dpdk/spdk_pid62620 00:27:34.258 Removing: /var/run/dpdk/spdk_pid63280 00:27:34.258 Removing: /var/run/dpdk/spdk_pid63616 00:27:34.258 Removing: /var/run/dpdk/spdk_pid63756 00:27:34.258 Removing: /var/run/dpdk/spdk_pid65178 00:27:34.258 Removing: /var/run/dpdk/spdk_pid65440 00:27:34.258 Removing: /var/run/dpdk/spdk_pid65586 00:27:34.258 Removing: /var/run/dpdk/spdk_pid67010 00:27:34.258 Removing: /var/run/dpdk/spdk_pid67263 00:27:34.258 Removing: /var/run/dpdk/spdk_pid67414 
00:27:34.258 Removing: /var/run/dpdk/spdk_pid68833 00:27:34.258 Removing: /var/run/dpdk/spdk_pid69291 00:27:34.258 Removing: /var/run/dpdk/spdk_pid69435 00:27:34.258 Removing: /var/run/dpdk/spdk_pid70964 00:27:34.258 Removing: /var/run/dpdk/spdk_pid71235 00:27:34.258 Removing: /var/run/dpdk/spdk_pid71386 00:27:34.258 Removing: /var/run/dpdk/spdk_pid72907 00:27:34.258 Removing: /var/run/dpdk/spdk_pid73178 00:27:34.258 Removing: /var/run/dpdk/spdk_pid73324 00:27:34.258 Removing: /var/run/dpdk/spdk_pid74840 00:27:34.258 Removing: /var/run/dpdk/spdk_pid75334 00:27:34.258 Removing: /var/run/dpdk/spdk_pid75480 00:27:34.258 Removing: /var/run/dpdk/spdk_pid75628 00:27:34.258 Removing: /var/run/dpdk/spdk_pid76081 00:27:34.258 Removing: /var/run/dpdk/spdk_pid76844 00:27:34.258 Removing: /var/run/dpdk/spdk_pid77231 00:27:34.258 Removing: /var/run/dpdk/spdk_pid77938 00:27:34.258 Removing: /var/run/dpdk/spdk_pid78418 00:27:34.258 Removing: /var/run/dpdk/spdk_pid79214 00:27:34.258 Removing: /var/run/dpdk/spdk_pid79630 00:27:34.258 Removing: /var/run/dpdk/spdk_pid81649 00:27:34.258 Removing: /var/run/dpdk/spdk_pid82094 00:27:34.258 Removing: /var/run/dpdk/spdk_pid82541 00:27:34.258 Removing: /var/run/dpdk/spdk_pid84676 00:27:34.258 Removing: /var/run/dpdk/spdk_pid85168 00:27:34.258 Removing: /var/run/dpdk/spdk_pid85673 00:27:34.258 Removing: /var/run/dpdk/spdk_pid86755 00:27:34.258 Removing: /var/run/dpdk/spdk_pid87078 00:27:34.258 Removing: /var/run/dpdk/spdk_pid88039 00:27:34.258 Removing: /var/run/dpdk/spdk_pid88368 00:27:34.258 Removing: /var/run/dpdk/spdk_pid89323 00:27:34.258 Removing: /var/run/dpdk/spdk_pid89654 00:27:34.258 Removing: /var/run/dpdk/spdk_pid90336 00:27:34.258 Removing: /var/run/dpdk/spdk_pid90611 00:27:34.258 Removing: /var/run/dpdk/spdk_pid90673 00:27:34.258 Removing: /var/run/dpdk/spdk_pid90715 00:27:34.258 Removing: /var/run/dpdk/spdk_pid90968 00:27:34.258 Removing: /var/run/dpdk/spdk_pid91143 00:27:34.258 Removing: /var/run/dpdk/spdk_pid91237 
00:27:34.516 Removing: /var/run/dpdk/spdk_pid91337 00:27:34.516 Removing: /var/run/dpdk/spdk_pid91390 00:27:34.516 Removing: /var/run/dpdk/spdk_pid91420 00:27:34.517 Clean 00:27:34.517 15:00:04 -- common/autotest_common.sh@1451 -- # return 0 00:27:34.517 15:00:04 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:27:34.517 15:00:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:34.517 15:00:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.517 15:00:04 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:27:34.517 15:00:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:34.517 15:00:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.517 15:00:04 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:34.517 15:00:04 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:34.517 15:00:04 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:34.517 15:00:04 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:27:34.517 15:00:04 -- spdk/autotest.sh@394 -- # hostname 00:27:34.517 15:00:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:34.776 geninfo: WARNING: invalid characters removed from testname! 
00:28:01.326 15:00:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:04.611 15:00:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:07.893 15:00:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:10.422 15:00:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:12.950 15:00:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:16.239 15:00:45 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:19.520 15:00:48 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:19.520 15:00:48 -- spdk/autorun.sh@1 -- $ timing_finish 00:28:19.520 15:00:48 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:28:19.520 15:00:48 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:19.520 15:00:48 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:28:19.520 15:00:48 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:19.520 + [[ -n 5265 ]] 00:28:19.520 + sudo kill 5265 00:28:19.529 [Pipeline] } 00:28:19.544 [Pipeline] // timeout 00:28:19.549 [Pipeline] } 00:28:19.563 [Pipeline] // stage 00:28:19.567 [Pipeline] } 00:28:19.579 [Pipeline] // catchError 00:28:19.587 [Pipeline] stage 00:28:19.589 [Pipeline] { (Stop VM) 00:28:19.599 [Pipeline] sh 00:28:19.882 + vagrant halt 00:28:24.066 ==> default: Halting domain... 00:28:30.635 [Pipeline] sh 00:28:30.913 + vagrant destroy -f 00:28:34.225 ==> default: Removing domain... 
00:28:34.236 [Pipeline] sh 00:28:34.516 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:28:34.524 [Pipeline] } 00:28:34.538 [Pipeline] // stage 00:28:34.543 [Pipeline] } 00:28:34.556 [Pipeline] // dir 00:28:34.560 [Pipeline] } 00:28:34.573 [Pipeline] // wrap 00:28:34.578 [Pipeline] } 00:28:34.590 [Pipeline] // catchError 00:28:34.598 [Pipeline] stage 00:28:34.600 [Pipeline] { (Epilogue) 00:28:34.611 [Pipeline] sh 00:28:34.893 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:41.465 [Pipeline] catchError 00:28:41.467 [Pipeline] { 00:28:41.479 [Pipeline] sh 00:28:41.764 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:42.022 Artifacts sizes are good 00:28:42.030 [Pipeline] } 00:28:42.044 [Pipeline] // catchError 00:28:42.056 [Pipeline] archiveArtifacts 00:28:42.063 Archiving artifacts 00:28:42.166 [Pipeline] cleanWs 00:28:42.177 [WS-CLEANUP] Deleting project workspace... 00:28:42.177 [WS-CLEANUP] Deferred wipeout is used... 00:28:42.183 [WS-CLEANUP] done 00:28:42.185 [Pipeline] } 00:28:42.201 [Pipeline] // stage 00:28:42.206 [Pipeline] } 00:28:42.219 [Pipeline] // node 00:28:42.224 [Pipeline] End of Pipeline 00:28:42.259 Finished: SUCCESS